Search Results

Search found 93284 results on 3732 pages for 'virtual server 2005 r2'.

Page 235/3732 | < Previous Page | 231 232 233 234 235 236 237 238 239 240 241 242  | Next Page >

  • SQL Server 2008 trigger not working correctly with multiple inserts

    - by Rob
    I've got the following trigger:

    CREATE TRIGGER trFLightAndDestination ON checkin_flight
    AFTER INSERT, UPDATE
    AS
    BEGIN
        IF NOT EXISTS (
            SELECT 1
            FROM Flight v
            INNER JOIN Inserted AS i ON i.flightnumber = v.flightnumber
            INNER JOIN checkin_destination AS ib ON ib.airport = v.airport
            INNER JOIN checkin_company AS im ON im.company = v.company
            WHERE i.desk = ib.desk AND i.desk = im.desk
        )
        BEGIN
            RAISERROR('This combination of flight and check-in desk is not possible',16,1)
            ROLLBACK TRAN
        END
    END

    What I want the trigger to do is to check the tables Flight, checkin_destination and checkin_company when a new record for checkin_flight is added. Every record of checkin_flight contains a flight number and the desk number where passengers need to check in for this destination. The tables checkin_destination and checkin_company contain information about companies and destinations that are restricted to certain check-in desks. When adding a record to checkin_flight, I need information from the Flight table to get the destination and flight company for the inserted flight number. This information needs to be checked against the available check-in combinations for flights, destinations and companies. I'm using the trigger as stated above, but when I try to insert a wrong combination the trigger allows it. What am I missing here?
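    One likely culprit with multi-row inserts is that the NOT EXISTS test only asks whether at least one inserted row has a valid combination, so a statement that mixes valid and invalid rows slips through. A minimal sketch of a per-row check, assuming the table and column names from the question, might look like this:

        -- Hedged sketch: roll back if ANY inserted row lacks a valid
        -- flight / destination / company / desk combination.
        IF EXISTS (
            SELECT 1
            FROM Inserted AS i
            INNER JOIN Flight AS v ON v.flightnumber = i.flightnumber
            WHERE NOT EXISTS (
                SELECT 1
                FROM checkin_destination AS ib
                INNER JOIN checkin_company AS im ON im.desk = ib.desk
                WHERE ib.airport = v.airport
                  AND im.company = v.company
                  AND ib.desk    = i.desk
            )
        )
        BEGIN
            RAISERROR('This combination of flight and check-in desk is not possible',16,1)
            ROLLBACK TRAN
        END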

    Read the article

  • Problem with a join in SQL Server 2000

    - by eyalb
    I have 3 tables - Items, Props and Items_To_Props - and I need to return all items that match all of the properties that I send. Example:

    Items:          1, 2, 3, 4
    Props:          T1, T2, T3
    Items_To_Props: (1, T1), (1, T2), (1, T3), (2, T1), (3, T1)

    When I send T1, T2 I need to get only item 1.
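    This is the classic relational-division pattern: keep only the items whose matched property count equals the number of requested properties. A hedged sketch follows; the column names ItemId and PropId are assumptions, not from the question, and the requested properties are staged in a table variable (available on SQL Server 2000):

        -- Hedged sketch: return items that match ALL requested properties.
        DECLARE @Wanted TABLE (PropId VARCHAR(10));
        INSERT INTO @Wanted (PropId) VALUES ('T1');
        INSERT INTO @Wanted (PropId) VALUES ('T2');

        SELECT itp.ItemId
        FROM Items_To_Props AS itp
        INNER JOIN @Wanted AS w ON w.PropId = itp.PropId
        GROUP BY itp.ItemId
        HAVING COUNT(DISTINCT itp.PropId) = (SELECT COUNT(*) FROM @Wanted);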

    Read the article

  • SQL Server 2008 - Query takes forever to finish even though work is actually done

    - by Brian
    Running the following simple query in SSMS:

    UPDATE tblEntityAddress SET strPostCode = REPLACE(strPostCode, ' ', '')

    The update to the data (at least in memory) is complete in under a minute. I verified this by running another query under the READ UNCOMMITTED transaction isolation level. The update query, however, continues to run for another 30 minutes. What is the issue here? Is this caused by a delay in writing to disk? TIA
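    One common reason a single-statement UPDATE of a whole table appears to hang is that everything runs in one huge transaction, so lock escalation and log growth dominate. A hedged sketch of batching the same change (UPDATE TOP requires SQL Server 2005 or later, which matches the 2008 instance in the question):

        -- Hedged sketch: strip spaces in small batches so each transaction stays short.
        WHILE 1 = 1
        BEGIN
            UPDATE TOP (50000) tblEntityAddress
            SET strPostCode = REPLACE(strPostCode, ' ', '')
            WHERE strPostCode LIKE '% %';   -- only rows that still contain a space

            IF @@ROWCOUNT = 0 BREAK;
        END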

    Read the article

  • SQL Server 2008 Stored Procedure

    - by user238319
    I cannot store a date data type value using this stored procedure. My code is:

    ALTER PROCEDURE [dbo].[Access1Register]
        -- Add the parameters for the stored procedure here
        @MobileNumber int,
        @CitizenName varchar(50),
        @Dob char(8),
        @VerificationCode int
    AS
    BEGIN
        -- SET NOCOUNT ON added to prevent extra result sets from
        -- interfering with SELECT statements.
        SET NOCOUNT ON;
        -- Insert statements for procedure here
        select CAST(@dob As DATE)
        Insert Into Access1 (MobileNo,CitizenName,Dob,VerificationCode)
        values(@MobileNumber,@CitizenName,@Dob,@VerificationCode)
    go

    If I exec this procedure it runs, but an error occurs for the date type value. It raises the error "invalid item '-'".
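    The error suggests the char(8) value is not in a format the DATE conversion understands. A hedged sketch of one fix, assuming the date of birth can be passed in the unambiguous unseparated form yyyymmdd (e.g. '19900215'), is to convert it explicitly with style 112:

        -- Hedged sketch: convert the char(8) date of birth explicitly before inserting.
        ALTER PROCEDURE [dbo].[Access1Register]
            @MobileNumber int,
            @CitizenName varchar(50),
            @Dob char(8),            -- expected as 'yyyymmdd'
            @VerificationCode int
        AS
        BEGIN
            SET NOCOUNT ON;

            INSERT INTO Access1 (MobileNo, CitizenName, Dob, VerificationCode)
            VALUES (@MobileNumber, @CitizenName, CONVERT(DATE, @Dob, 112), @VerificationCode);
        END
        GO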

    Read the article

  • SQL Server 2008 Problem with SCOPE_IDENTITY()

    - by jinsungy
    My code does not update the thread field. It is null. Anyone have any ideas?

    INSERT INTO [Messages]([Sender], [Receiver], [Job_Number], [Subject], [MessageText], [DateSent])
    VALUES(@Sender, @Receiver, @Job_Number, @Subject, @MessageText, @DateSent)

    SET @ThreadID = SCOPE_IDENTITY()

    UPDATE [Messages]
    SET Thread = @ThreadID
    WHERE MessageID = @ThreadID
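    SCOPE_IDENTITY() only returns a value if the INSERT in the same scope targets a table with an IDENTITY column, and @ThreadID must be declared in that scope. A hedged alternative sketch, assuming MessageID is the IDENTITY column and the @... parameters are those from the question, captures the new key explicitly with an OUTPUT clause:

        -- Hedged sketch: capture the generated MessageID and use it to seed Thread.
        DECLARE @NewId TABLE (MessageID INT);

        INSERT INTO [Messages] ([Sender], [Receiver], [Job_Number], [Subject], [MessageText], [DateSent])
        OUTPUT inserted.MessageID INTO @NewId (MessageID)
        VALUES (@Sender, @Receiver, @Job_Number, @Subject, @MessageText, @DateSent);

        UPDATE m
        SET m.Thread = n.MessageID
        FROM [Messages] AS m
        INNER JOIN @NewId AS n ON n.MessageID = m.MessageID;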

    Read the article

  • Updating only the date part of a datetime in SQL Server 2000

    - by user294146
    Hi experts, I have data in the table like the following:

    col1                  col2                  col3
    6/5/2010 18:05:00     6/2/2010 10:05:00     NULL
    6/8/2010 15:05:00     6/3/2010 10:45:00     6/5/2010 11:05:00
    6/3/2010 15:05:00     NULL                  6/7/2010 12:05:00
    6/1/2010 15:05:00     6/3/2010 10:45:00     6/1/2010 14:05:00

    My requirement is to update the date in these columns to a single date without disturbing the time. For example, I want to set the date to 6/1/2010 wherever the field is not NULL. Please let me know the query for updating the table data. Thanks & regards, murali
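    On SQL Server 2000 there is no separate DATE type, so one hedged approach is to shift each value by whole days onto the target date, which leaves the time of day untouched. The table name below is a placeholder; col1/col2/col3 are from the question:

        -- Hedged sketch: move the date part to 2010-06-01 while keeping the time part.
        UPDATE MyTable
        SET col1 = CASE WHEN col1 IS NOT NULL
                        THEN DATEADD(day, DATEDIFF(day, col1, '20100601'), col1) END,
            col2 = CASE WHEN col2 IS NOT NULL
                        THEN DATEADD(day, DATEDIFF(day, col2, '20100601'), col2) END,
            col3 = CASE WHEN col3 IS NOT NULL
                        THEN DATEADD(day, DATEDIFF(day, col3, '20100601'), col3) END;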

    Read the article

  • SQL SERVER Spatial Data

    - by Sam
    Hi All, I am struggling to find an efficient way to compute the distance between a point that intersects a polygon and the border of that polygon. I was able to use STDistance, comparing the point to every point that makes up the polygon, but that takes a lot of time. Using a spatial index wasn't much help because STDistance is not part of any constraint, and even when I did add the constraint, the index didn't help much. I appreciate any feedback. Thanks.
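    Rather than measuring to each vertex, one hedged option is to measure to the polygon's boundary ring directly, since STDistance against a line returns the shortest distance to the line itself. The variable names and coordinates below are illustrative, not from the question:

        -- Hedged sketch: distance from a point inside a polygon to the polygon's border.
        DECLARE @polygon GEOMETRY = GEOMETRY::STGeomFromText(
            'POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))', 0);
        DECLARE @point GEOMETRY = GEOMETRY::STGeomFromText('POINT(3 4)', 0);

        -- STBoundary() returns the polygon's ring(s), so the distance is measured
        -- to the edge itself rather than to its vertices.
        SELECT @point.STDistance(@polygon.STBoundary()) AS DistanceToBorder;  -- 3 in this example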

    Read the article

  • SQL SERVER – Find Max Worker Count using DMV – 32 Bit and 64 Bit

    - by pinaldave
    During several recent training courses, I found it very interesting that the worker thread concept is not well known to everyone, despite the fact that it is a very important feature. At some point in the discussion, one of the attendees mentioned that we can double the worker thread count if we double the CPUs (add the same number of CPUs that we have on the current system). That discussion triggered this quick article. Here is the DMV which can be used to find the Max Worker Count:

    SELECT max_workers_count FROM sys.dm_os_sys_info

    Let us run the above query on my system and look at the results. As my system is 64-bit and has two CPUs, the Max Worker Count is displayed as 512. To address the earlier discussion, adding more CPUs does not necessarily double the worker count. In fact, the logic behind this is as follows:

    For x86 (32-bit), up to 4 logical processors: max worker threads = 256
    For x86 (32-bit), more than 4 logical processors: max worker threads = 256 + ((# Procs – 4) * 8)
    For x64 (64-bit), up to 4 logical processors: max worker threads = 512
    For x64 (64-bit), more than 4 logical processors: max worker threads = 512 + ((# Procs – 4) * 8)

    In addition to this, you can configure Max Worker Threads by using SSMS: go to the server node, right-click and select Properties, select the Processors page and modify the setting under Worker Threads. According to Books Online, the default worker thread settings are appropriate for most systems. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV
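    As a quick sanity check, the formulas above can be compared with what the instance reports, since the same DMV also exposes the logical CPU count. A hedged sketch:

        -- Hedged sketch: derive the default max worker threads from the formulas above
        -- and compare with the value the instance reports.
        SELECT
            cpu_count,
            max_workers_count,
            CASE WHEN cpu_count <= 4 THEN 256
                 ELSE 256 + (cpu_count - 4) * 8 END AS expected_x86,
            CASE WHEN cpu_count <= 4 THEN 512
                 ELSE 512 + (cpu_count - 4) * 8 END AS expected_x64
        FROM sys.dm_os_sys_info;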

    Read the article

  • SQL SERVER – A Puzzle – Illusion – Confusion – April Fools’ Day

    - by pinaldave
    Today is April 1st and, just like every other year, I like to bring something interesting and light for the day. At least there should be days in everyone's life when they can take it easy. Here is a quick puzzle for you, and I believe it will make you feel extremely smart if you can figure out the reason behind the result. Run the following in SQL Server Management Studio and observe the output:

    SELECT 30.0/(-2.0)/5.0;
    SELECT 30.0/-2.0/5.0;

    Here are a few questions for you: 1) What will be the result of the above two queries? 2) Why? If you think you can figure out the result without executing them – I encourage you to execute BOTH of them in SSMS and see if they give you the same result or different results. Well, now I am waiting for your answer here – why? I often post similar things on my Facebook page http://facebook.com/SQLAuth – you are welcome to play with me there. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – GUID vs INT – Your Opinion

    - by pinaldave
    I think the title makes clear what I am going to write about in this post. This is an age-old problem and I want to compile a list stating the advantages and disadvantages of using GUID and INT as a Primary Key or Clustered Index or both (the usual case). Let me start the list by suggesting one advantage and one disadvantage in each case.

    INT
    Advantage: Numeric values (and specifically integers) are better for performance when used in joins, indexes and conditions. Numeric values are also easier for application users to understand if they are displayed.
    Disadvantage: If your table is large, it is quite possible it will run out of values; after some numeric value there will be no additional identity to use.

    GUID
    Advantage: Unique across servers.
    Disadvantage: String values are not as optimal as integer values for performance when used in joins, indexes and conditions. More storage space is required than for INT.

    Please note that I am looking to create a list of all the generic comparisons. There can be special cases where the stated information is incorrect; feel free to comment on the same. Please leave your opinion and advice in the comment section. I will compile a final list and update this blog after a week. By listing your name in the post, I will also give due credit. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Data Storage, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
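    For readers who want to experiment with both designs, a minimal sketch of the two key choices being compared (table and column names are illustrative only, not from the post):

        -- Hedged sketch: INT IDENTITY key versus GUID key with a sequential default.
        CREATE TABLE OrdersInt
        (
            OrderID   INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,              -- 4 bytes, ever-increasing
            OrderDate DATETIME NOT NULL
        );

        CREATE TABLE OrdersGuid
        (
            OrderID   UNIQUEIDENTIFIER NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY CLUSTERED,  -- 16 bytes, unique across servers
            OrderDate DATETIME NOT NULL
        );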

    Read the article

  • SQL SERVER – Precision of SMALLDATETIME – A 1 Minute Precision

    - by pinaldave
    I am myself surprised that I am writing this post today. I am going to present one of the well-known facts of the SQL Server SMALLDATETIME datatype. Even though this is a very well-known datatype, many a time I have seen developers getting confused about the precision of SMALLDATETIME. The precision of the SMALLDATETIME datatype is 1 minute: it discards the seconds by rounding any seconds value greater than zero up or down. Let us see the following example:

    DECLARE @varSDate AS SMALLDATETIME
    SET @varSDate = '1900-01-01 12:12:01'
    SELECT @varSDate C_SDT
    SET @varSDate = '1900-01-01 12:12:29'
    SELECT @varSDate C_SDT
    SET @varSDate = '1900-01-01 12:12:30'
    SELECT @varSDate C_SDT
    SET @varSDate = '1900-01-01 12:12:59'
    SELECT @varSDate C_SDT

    Following is the result of the above script; note that any seconds value between 0 (zero) and 59 is rounded up or down. The part that confuses developers is the value of the seconds in the display. I think if it is not maintained or recorded, it should not be displayed either. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQLAuthority News – SQL Server Performance Series Hyderabad / Pune – Nov/Dec 2010

    - by pinaldave
    Just a quick note that the SQL Server Performance Tuning and Optimizations Seminar series which I am offering at Hyderabad and Pune is almost sold out. Read the details of the earlier successful seminar conducted at Colombo, Sri Lanka over here.

    Hyderabad, Nov 27-28, 2010 (Last 3 Seats Left)
    Best Western Amrutha Castle, 5-9-16, Opp. Secretariat, Saifabad, Khairatabad, Hyderabad, Andhra Pradesh

    Pune, Dec 04-05, 2010 (Last 6 Seats Left)
    Location TBA as we are looking for a larger-capacity room.

    I promise that this is going to be great fun, as these sessions are very different from any of the usual sessions you have ever attended. The sessions are absolutely interactive and all the attendees will feel part of the event. As larger groups are not convenient, we have limited these seminars to a very small group of people. This way attendees can approach the instructors at any time and feel connected. This 2-day seminar will cover the best of the best concepts and practices from popular courses offered by Solid Quality Mentors. Instead of learning theory only, the seminar focuses on providing real-world experience by using demos and scenarios derived from customer engagements. The seminar is uniquely structured and well thought out. Sessions are discussion-based and are designed to be an interactive gateway between the instructor and the participants for an optimal learning experience. The seminar is intended to be immersion-based, where participants will have plenty of opportunities to get deeply involved in the concepts presented by the instructor. Agenda of the event. To join the seminars, drop me an email. My email addresses are pinal “at” SQLAuthority.com and IndiaInfo “at” SolidQ.com. If you specify SQLAuthority.com in the title, you will get a special discount on the specified price. Yes, a sure 20%, I promise. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – Solution – Puzzle – Challenge – Error While Converting Money to Decimal

    - by pinaldave
    Earlier I had posted a quick puzzle and received a wonderful response to it. Today we will go over the solution. The puzzle was posted here: SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal. Run the following code in SSMS:

    DECLARE @mymoney MONEY;
    SET @mymoney = 12345.67;
    SELECT CAST(@mymoney AS DECIMAL(5,2)) MoneyInt;
    GO

    The above code will give the following error:

    Msg 8115, Level 16, State 8, Line 3
    Arithmetic overflow error converting money to data type numeric.

    Why, and what is the solution? The solution is as follows:

    DECLARE @mymoney MONEY;
    SET @mymoney = 12345.67;
    SELECT CAST(@mymoney AS DECIMAL(7,2)) MoneyInt;
    GO

    There were more than 20 valid answers. Here is the reason. The decimal data type is defined as DECIMAL(Precision, Scale), in other words DECIMAL(Total digits, Digits after decimal point). Precision includes Scale, so DECIMAL(5,2) actually means we can have 3 digits before the decimal point and 2 digits after it. To accommodate 12345.67 one needs higher precision. The correct answer is DECIMAL(7,2) as it can hold all seven digits. Here is the list of the experts who got the correct answer, and I encourage all of you to read their comments over here: Fbncs, Piyush Srivastava, Dheeraj, Abhishek, Anil Gurjar, Keval Patel, Rajan Patel, Himanshu Patel, Anurodh Srivastava, aasim abdullah, Paulo R. Pereira, Chintak Chhapia, Scott Humphrey, Alok Chandra Shahi, Imran Mohammed, SHIVSHANKER. The very first answer was provided by Fbncs, and Dheeraj had a very interesting comment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – NuoDB in Sixty Seconds – SQL in Sixty Seconds #053

    - by Pinal Dave
    Earlier this week, I did a five-part blog series on NuoDB and it was very well received by the audience. NuoDB is an elastically scalable SQL database that can run on localhost, datacenter and cloud-based resources. It is an operational NewSQL database built on a patented emergent architecture with full support for SQL and ACID guarantees. In this blog post, I will explore how one can download and install the NuoDB database. In this video I explain how one can install NuoDB in a few seconds and set up the entire environment in a few more seconds. One can get going with the installation of NuoDB and a sample database in a total of less than 60 seconds. Let us see the same concept in the following SQL in Sixty Seconds video. You can download NuoDB and reproduce the same Sixty Seconds experience. Related tips in SQL in Sixty Seconds:

    Part 1 – Install NuoDB in 90 Seconds
    Part 2 – Manage NuoDB Installation
    Part 3 – Explore NuoDB Database
    Part 4 – Migrate from SQL Server to NuoDB
    Part 5 – NuoDB and Third Party Explorer

    What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Interview Questions and Answers, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Identity

    Read the article

  • Samba Server: Make Multiple User Permission Profiles

    - by Scriptonaut
    I have a Samba file server running, and I was wondering how I could make multiple user accounts that have different permissions. For example, at the moment I have a user, smbusr, but when I ssh to the share, I can read, write, execute, and even navigate out of the Samba directory and do stuff on the actual computer. This is bad because I want to be able to give out my IP so friends/family can use the server, but I don't want them to be able to do just anything. I want to lock the user into the Samba share directory (and all of its subdirectories). Eventually I would like several profiles such as smbusr_R, smbusr_RW, smbguest_R and smbguest_RW. I also have a second question related to this: is SSH the best method to connect from other Unix machines? What about VPN? Or simply mounting like this: mount -t ext3 -o user=username //ipaddr/share /mnt/mountpoint. Is that mounting command the same thing as a VPN? This is really confusing me. Thanks for the help guys; let me know if you need to see any files or need any more information.

    Read the article

  • SQL SERVER – Find First Non-Numeric Character from String

    - by pinaldave
    It is fun when you have to deal with simple problems for which there is no out-of-the-box solution. I am sure there are many cases when we need the first non-numeric character from a string, but there is no function available to identify it right away. Here is the quick script I wrote using PATINDEX. The PATINDEX function has existed for quite a long time in SQL Server, but I hardly see it being used. Well, at least I use it and I am comfortable using it. Here is a simple script which I use when I have to identify the first non-numeric character.

    -- How to find the first non-numeric character
    USE tempdb
    GO
    CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100))
    GO
    INSERT INTO MyTable (ID, Col1)
    SELECT 1, '1one'
    UNION ALL
    SELECT 2, '11eleven'
    UNION ALL
    SELECT 3, '2two'
    UNION ALL
    SELECT 4, '22twentytwo'
    UNION ALL
    SELECT 5, '111oneeleven'
    GO
    -- Use of PATINDEX
    SELECT PATINDEX('%[^0-9]%',Col1) 'Position of NonNumeric Character',
           SUBSTRING(Col1,PATINDEX('%[^0-9]%',Col1),1) 'NonNumeric Character',
           Col1 'Original Character'
    FROM MyTable
    GO
    DROP TABLE MyTable
    GO

    Here is the resultset. Where do I use this in the real world? Well, there are lots of examples. In one of the future blog posts I will cover that as well. Meanwhile, do you have a better way to achieve the same? Do share it here. I will write a follow-up blog post with due credit to you. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL String, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • RAID5 over LVM on Ubuntu Server 12.04.3

    - by April Ethereal
    I'm trying to create a RAID5 software array using LVM. I use Ubuntu Server 12.04.3 (i686). I use VirtualBox as I'm only learning how LVM works. So I've created 4 virtual SCSI drives and then did the following:

    pvcreate /dev/sd[b-e]
    vgcreate /dev/sd[b-e] raid5_vg
    lvcreate --type raid5 -i 3 -L 1G -n raid_lv raid5_vg

    However, I get an error after the last command:

    WARNING: Unrecognised segment type raid5
    Using default stripesize 64.00 KiB
    Rounding size (256 extents) up to stripe boundary size (258 extents)
    Cannot update volume group raid5_vg with unknown segments in it!

    So it looks like raid5 is not a valid segment type. "lvm segtypes" also doesn't contain a 'raid5' entry:

    root@ubuntu-lvm:~# lvm segtypes
    striped zero error free snapshot mirror

    So my question is: how could I create a RAID5 logical volume using LVM only? It seems that it is possible; I saw a few references (not for Ubuntu, unfortunately) for RedHat and Gentoo systems. I don't want to use mdadm for now, until I find out that it is mandatory. Some info about my system is below:

    root@ubuntu-lvm:~# uname -a
    Linux ubuntu-lvm 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 15:31:16 UTC 2013 i686 i686 i386 GNU/Linux
    root@ubuntu-lvm:~# dpkg -l | grep lvm
    ii lvm2 2.02.66-4ubuntu7.3 The Linux Logical Volume Manager

    Thanks.

    Read the article

  • SQL SERVER – 2011 – SEQUENCE is not IDENTITY

    - by pinaldave
    Yesterday I posted a blog post on the subject SQL SERVER – 2011 – Introduction to SEQUENCE – Simple Example of SEQUENCE, and I received a comment where a user was not clear about the difference between SEQUENCE and IDENTITY. The reality is that SEQUENCE is not like IDENTITY. There is a very clear difference between them: an identity belongs to a single column of a single table, whereas a sequence is always incrementing and is not dependent on any table. Here is a quick example of the same.

    USE AdventureWorks2008R2
    GO
    CREATE SEQUENCE [Seq]
    AS [int]
    START WITH 1
    INCREMENT BY 1
    MAXVALUE 20000
    GO
    -- Run five times
    SELECT NEXT VALUE FOR Seq AS SeqNumber;
    SELECT NEXT VALUE FOR Seq AS SeqNumber;
    SELECT NEXT VALUE FOR Seq AS SeqNumber;
    SELECT NEXT VALUE FOR Seq AS SeqNumber;
    SELECT NEXT VALUE FOR Seq AS SeqNumber;
    GO
    -- Clean Up
    DROP SEQUENCE [Seq]
    GO

    Here is the resultset. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Contest Winner – What Next on SQL in Sixty Seconds – Poll Result

    - by Pinal Dave
    A few days ago, I asked a question on this blog. The question was: what would you like to see in the next episodes of SQL in Sixty Seconds? The poll is still active and posted over here: SQL SERVER – Poll – What would you love to see in SQL in Sixty Seconds? The contest was to suggest the next item of SQL in Sixty Seconds and vote for your choice of subject. There have been plenty of votes in this contest; however, there were only 4 comments on this blog post. Hence, selecting a winner was very simple.

    Result of Poll
    It is very clear from the result that most people would like to watch Performance Tuning subjects. I will continue to build videos on this subject in the future.

    Contest Winner
    Now is the time to announce the winner of the contest, chosen from those who left comments on the blog. The winner is Raelyard. Here is the comment which he left on the blog. Raelyard, please reach out to me via email and I will send you the gift card.

    Current Contest
    Here is the contest which is currently running on this blog. You can take part in the contest and win a Drone.

    SQL in Sixty Seconds
    Here are a few of the episodes of SQL in Sixty Seconds which you can watch. We will have more episodes of SQL in Sixty Seconds from next week, focused on performance. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Video

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction

    The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising: it's an incredibly complicated beast and we're abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them. [Screenshot: Example dataflow] In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input and all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum: that adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this; it is intuitive to think that more components means more work. However, this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that, I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I'll happily call that one out as a myth even without any investigation!

    The Setup

    I have a 2GB datafile which is a list of 4731904 (~4.7 million) customer records with various attributes against them, and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is an SSIS raw format file, which I chose to use because it is the quickest way of getting data into a dataflow; given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS' Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource-intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad-core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive. It's running SQL Server 2008 R2 on Windows 7.

    The Variables

    Here are the three combinations of components that I am going to test:

    1. One Conditional Split - A single Conditional Split component, CSPL Split by Month of Birth and Income Category, that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. [Screenshot: the expression logic in use]
    2. Derived Column & Conditional Split - A Derived Column component, DER Income Category, that adds a new column [IncomeCategory] which will contain one of two possible text values {“LessThan50000”,”GreaterThan50000”} and uses [YearlyIncome] to determine which value each row should get. A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. [Screenshots: DER Income Category, CSPL Split by Month of Birth and Income Category]
    3. Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each Income Category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case then I am separating the single Conditional Split of #1 into three Conditional Split components. [Screenshots: CSPL Split by Income Category, CSPL Split by Month of Birth 1 & 2]

    Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. [Screenshot: the dataflow containing three Conditional Split components] As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes:

    1. The dataflow containing just one Conditional Split (i.e. #1) will be quicker.
    2. There is no significant difference between any of them.
    3. One of the two dataflows containing multiple transformation components will be quicker.

    Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.

    The Results and Analysis

    The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the average that the dataflow containing three conditional splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that performance for this dataflow can be predicted with much greater confidence too.

    The Explanation

    I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of Month of Birth and Income Category – 24 in total. These expressions get evaluated in the order that they appear, and hence if we assume that Month of Birth and Income Category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is (1 (the minimum) + 24 (the maximum)) / 2 = 12.5. Now take a look at the screenshots for the second dataflow. We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case then we have 1 + 12.5 = 13.5 expected evaluations for each row – that would account for the slightly longer average execution time for this dataflow. Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations in each is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. 14.5 is still more than 12.5 & 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate – one for Income Category and one for Month of Birth; the expressions in the Conditional Split in the third dataflow, however, only have one predicate, thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between MONTH(BirthDate) == 1 && YearlyIncome <= 50000 and MONTH(BirthDate) == 1. In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant amount of extra CPU cycles – that's where our duration difference comes from.

    The Wrap-up

    The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results – let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out:

    Inequality joins, Asynchronous transformations and Lookups
    Destination Adapter Comparison
    Don't turn the dataflow into a cursor
    SSIS Dataflow – Designing for performance (webinar)

    Any comments? Let me know! @Jamiet

    Read the article

  • SQL SERVER – Table Variables and Transactions – SQL in Sixty Seconds #007 – Video

    - by pinaldave
    Today’s SQL in Sixty Seconds video is inspired by my presentation at TechEd India 2012 on Misconception and Resolution. Quite often I have seen people getting confused by certain behavior of T-SQL. They expect SQL to behave a certain way and SQL Server behaves differently. This kind of issue often creates confusion and frustration. Sometimes I have seen them confuse it with a bug and submit a bug report, where the reality is totally different. A similar concept is what we are going to see today. I have quite commonly seen developers assuming that table variables will be rolled back when a transaction is rolled back. This sixty-second video shows that table variables are not rolled back when transactions are rolled back. More on errors:

    Difference Temp Table and Table Variable – Effect of Transaction
    Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT
    Debate – Table Variables vs Temporary Tables – Quiz – Puzzle – 13 of 31

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video
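    For readers who want to verify the behavior themselves, a minimal sketch (the temp table is included purely for contrast with the table variable):

        -- Hedged sketch: a table variable keeps its rows after ROLLBACK, a temp table does not.
        DECLARE @t TABLE (ID INT);
        CREATE TABLE #t (ID INT);

        BEGIN TRAN;
            INSERT INTO @t (ID) VALUES (1);
            INSERT INTO #t (ID) VALUES (1);
        ROLLBACK TRAN;

        SELECT COUNT(*) AS TableVariableRows FROM @t;  -- 1: the insert survives the rollback
        SELECT COUNT(*) AS TempTableRows FROM #t;      -- 0: the insert was rolled back

        DROP TABLE #t;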

    Read the article
