Search Results

Search found 9 results on 1 page for 'sqlperformance'.


  • What tools are people using to measure SQL Server database performance?

    - by Paul McLoughlin
    I've experimented with a number of techniques for monitoring the health of our SQL Servers, ranging from the Management Data Warehouse functionality built into SQL Server 2008, through commercial products such as Confio Ignite 8, to rolling my own solution using perfmon, performance counters, and various information collected from the dynamic management views and functions. What I am finding is that whilst each of these approaches has its own strengths, each has associated weaknesses too. I feel that to get people within the organisation to take the monitoring of SQL Server performance seriously, whatever solution we roll out has to be very simple and quick to use, must provide some form of dashboard, and the act of monitoring must have minimal impact on the production databases (and, perhaps even more importantly, it must be possible to prove that this is the case). So I'm interested to hear: what are others using for this task? Any recommendations?
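    For the roll-your-own route, one lightweight starting point that also satisfies the "minimal impact" requirement is polling the wait-statistics DMV on a schedule and charting the deltas on a dashboard. A minimal sketch (the filtered wait types and the TOP (10) cutoff are illustrative choices, not a recommendation):

        -- Top cumulative waits since the last service restart (or stats clear)
        SELECT TOP (10)
               wait_type,
               waiting_tasks_count,
               wait_time_ms / 1000.0        AS wait_time_s,
               signal_wait_time_ms / 1000.0 AS signal_wait_s
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN (N'LAZYWRITER_SLEEP', N'SLEEP_TASK', N'BROKER_TASK_STOP')
        ORDER BY wait_time_ms DESC;

    Reading a DMV like this is close to free on the server, which also makes the "prove it has minimal impact" conversation easier.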

    Read the article

  • Optimizing T-SQL code

    - by adopilot
    My job is to maintain an application that makes heavy use of SQL Server (MSSQL 2005). Until now the middle tier has stored its T-SQL in XML and sent dynamic T-SQL queries without using stored procs. As I am able to change those XML queries, I want to migrate most of them to stored procs. My question is the following: most of my queries have the same WHERE conditions against one table. Sample:

        SELECT .....
        FROM ....
        WHERE ....
          AND (a.vrsta_id = @vrsta_id OR @vrsta_id = 0)
          AND (a.podvrsta_id = @podvrsta_id OR @podvrsta_id = 0)
          AND (a.podgrupa_2 = @podgrupa2_id OR @podgrupa2_id = 0)
          AND (
                (a.id IN (SELECT art_id
                          FROM osobina_veze
                          WHERE podosobina_id IN (SELECT ado FROM dbo.fn_ado_param_int(@podosobina))
                          GROUP BY art_id
                          HAVING COUNT(art_id) = @podosobina_count))
                OR ('0' = @podosobina)
              )

    They also have the same WHERE conditions on another table. How should I organize my code? What is the proper way? Should I make a table-valued function that I use in all queries, or use #temp tables and simply inner join each query to them every time the proc executes? Or a #temp table filled by a table-valued function? Or leave all queries with this large WHERE clause and hope the indexes do their job? Or use a WITH statement (CTE)?
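    For what it's worth, one common pattern for this kind of catch-all ("optional parameter") WHERE clause is to keep the predicates inline in each stored procedure but add OPTION (RECOMPILE), so a plan is compiled for the parameter values of each call instead of one generic plan being reused. A minimal sketch, with hypothetical names (dbo.usp_GetArtikli, dbo.Artikal) standing in for the real ones:

        CREATE PROCEDURE dbo.usp_GetArtikli   -- hypothetical name
            @vrsta_id    int,
            @podvrsta_id int
        AS
        BEGIN
            SET NOCOUNT ON;

            SELECT a.id
            FROM dbo.Artikal AS a             -- hypothetical table
            WHERE (a.vrsta_id = @vrsta_id OR @vrsta_id = 0)
              AND (a.podvrsta_id = @podvrsta_id OR @podvrsta_id = 0)
            OPTION (RECOMPILE);               -- plan compiled for the values actually passed
        END;

    A shared table-valued function keeps the logic in one place, but it can hide the predicates from the optimizer; the inline version with OPTION (RECOMPILE) usually optimizes better. Note that on SQL Server 2005 the recompile happens per call but the parameter values are not embedded as literals the way they are on later versions, so the benefit there is smaller.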

    Read the article

  • SQL performance on a new server

    - by Rapunzo
    My database is running on a PC (AMD Phenom X6, Intel SSD, 8 GB DDR3 RAM, Windows 7 + SQL Server 2008 R2 SP3) and it has started to struggle: timeout problems and queries of up to 30 seconds once the database reached 200 MB. I also have an old server PC (IBM xSeries 266: 72 GB x 3 15k rpm SCSI disks in RAID 5, 4 GB RAM, Windows Server 2003 + SQL Server 2008 R2 SP3), and the same query starts to give results in 100 seconds. I tried the query analyzer tool for tuning my indexes, but didn't get much improvement. It's a big disappointment for me, because I thought that even though it's an old server PC it should be more powerful, with 15k rpm disks in RAID 5. What should I do? Do I need a $10,000 new server to get good performance from my SQL Server, or can't I use that IBM server? Extra information: there are 50 SQL users and it's an ERP program. Here is my query:

        ALTER FUNCTION [dbo].[fnDispoTerbiye] ()
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT MD.dispoNo, SV.sevkNo, M1.musteriAdi AS musteri, SD.tipTurId, TT.tipTur,
                   SD.tipNo, SD.desenNo, SD.varyantNo, SUM(T.topMetre) AS toplamSevkMetre,
                   MD.dispoMetresi, DT.gelisMetresi, ISNULL(DT.fire, 0) AS fire, SV.sevkTarihi,
                   DT.gelisTarihi, SP.mamulTermin, SD.miktar AS siparisMiktari,
                   M.musteriAdi AS boyahane, MD.akisNotu AS islemler, --dbo.fnAkisIslemleri(MD.dispoNo)
                   DT.partiNo, DT.iplikBoyaId, B.tanimAd AS BoyaTuru, MAX(HD.hamEn) AS hamEn,
                   MAX(HD.hamGramaj) AS hamGramaj, TS.mamulEn, TS.mamulGramaj, DT.atkiCekmesi,
                   DT.cozguCekmesi, DT.fiyat, DV.dovizCins, DT.dovizId,
                   (SELECT CASE
                        WHEN DT.dovizId = 2 THEN CAST(ROUND(SUM(T.topMetre) * DT.fiyat *
                             (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 2 ORDER BY tarih DESC), 2) AS numeric(18, 2))
                        WHEN DT.dovizId = 3 THEN CAST(ROUND(SUM(T.topMetre) * DT.fiyat *
                             (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 3 ORDER BY tarih DESC), 2) AS numeric(18, 2))
                        WHEN DT.dovizId = 1 THEN CAST(ROUND(SUM(T.topMetre) * DT.fiyat *
                             (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 1 ORDER BY tarih DESC), 2) AS numeric(18, 2))
                    END AS Expr1) AS ToplamTLfiyat,
                   DT.aciklama, MD.dispoNotu, SD.siparisId, SD.siparisDetayId, DT.sqlUserName,
                   DT.kayitTarihi, O.orguAd,
                   'Çözgü=(' + (SELECT dbo.fnTipIplikler(SD.tipTurId, SD.tipNo, SD.desenNo, SD.varyantNo, 1) AS Expr1) + ')'
                   + ' Atki=(' + (SELECT dbo.fnTipIplikler(SD.tipTurId, SD.tipNo, SD.desenNo, SD.varyantNo, 2) AS Expr1) + ')' AS iplikAciklama,
                   DT.prosesOk, dbo.[fnYikamaTalimat](SP.siparisId) yikamaTalimati
            FROM tblDoviz AS DV WITH (NOLOCK)
            INNER JOIN tblDispoTerbiye AS DT WITH (NOLOCK)
                INNER JOIN tblTanimlar AS B WITH (NOLOCK)
                    ON DT.iplikBoyaId = B.tanimId AND B.tanimTurId = 2
                ON DV.id = DT.dovizId
            RIGHT OUTER JOIN tblMusteri AS M1 WITH (NOLOCK)
                INNER JOIN tblSiparisDetay AS SD WITH (NOLOCK)
                    INNER JOIN tblDispo AS MD WITH (NOLOCK)
                        ON SD.siparisDetayId = MD.siparisDetayId
                    INNER JOIN tblTipTur AS TT WITH (NOLOCK)
                        ON SD.tipTurId = TT.tipTurId
                    INNER JOIN tblSiparis AS SP WITH (NOLOCK)
                        ON SD.siparisId = SP.siparisId
                    ON M1.musteriNo = SP.musteriNo
                INNER JOIN tblTip AS TP WITH (NOLOCK)
                    ON SD.tipTurId = TP.tipTurId AND SD.tipNo = TP.tipNo
                   AND SD.desenNo = TP.desen AND SD.varyantNo = TP.varyant
                INNER JOIN tblOrgu AS O WITH (NOLOCK)
                    ON TP.orguId = O.orguId
                INNER JOIN tblMusteri AS M WITH (NOLOCK)
                    INNER JOIN tblSevkiyat AS SV WITH (NOLOCK)
                        ON M.musteriNo = SV.musteriNo
                    INNER JOIN tblSevkDetay AS SVD WITH (NOLOCK)
                        ON SV.sevkNo = SVD.sevkNo
                    ON MD.mamulDispoHamSevkno = SV.sevkNo
                LEFT OUTER JOIN tblTop AS T WITH (NOLOCK)
                    INNER JOIN tblDispo AS HD WITH (NOLOCK)
                        ON T.dispoNo = HD.dispoNo AND T.dispoTuruId = HD.dispoTuruId
                    ON SVD.dispoTuruId = T.dispoTuruId AND SVD.dispoNo = T.dispoNo
                   AND SVD.topNo = T.topNo AND MD.siparisDetayId = HD.siparisDetayId
                ON DT.dispoTuruId = MD.dispoTuruId AND DT.dispoNo = MD.dispoNo
            LEFT OUTER JOIN tblDispoTerbiyeTest AS TS WITH (NOLOCK)
                ON DT.dispoTuruId = TS.dispoTuruId AND DT.dispoNo = TS.dispoNo
            --WHERE DT.gelisTarihi IS NULL
            --   OR DT.gelisTarihi > GETDATE() - 30
            GROUP BY MD.dispoNo, DT.partiNo, DT.iplikBoyaId, TS.mamulEn, TS.mamulGramaj,
                     DT.gelisMetresi, DT.gelisTarihi, DT.atkiCekmesi, DT.cozguCekmesi, DT.fire,
                     DT.fiyat, DT.aciklama, DT.sqlUserName, DT.kayitTarihi, SD.tipTurId, TT.tipTur,
                     SD.tipNo, SD.desenNo, SD.varyantNo, SD.siparisId, SD.siparisDetayId, B.tanimAd,
                     M.musteriAdi, M1.musteriAdi, O.orguAd, TP.iplikAciklama, SD.miktar, MD.dispoNotu,
                     SP.mamulTermin, DT.dovizId, DV.dovizCins, MD.dispoMetresi, MD.akisNotu,
                     SV.sevkNo, SV.sevkTarihi, DT.prosesOk, SP.siparisId
        )
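    One detail that stands out in that query: the ToplamTLfiyat CASE expression fetches the latest exchange rate from tblKur with a correlated TOP 1 subquery in each branch, and the three branches differ only by a dovizId literal that always equals DT.dovizId in the branch that fires. A hedged sketch of folding that lookup into a single OUTER APPLY, shown standalone rather than spliced into the full function:

        -- Latest rate for each row's currency, expressed once instead of three times
        SELECT DT.dispoNo,
               CAST(ROUND(DT.fiyat * K.satis, 2) AS numeric(18, 2)) AS fiyatTL
        FROM tblDispoTerbiye AS DT
        OUTER APPLY (SELECT TOP (1) KU.satis
                     FROM tblKur AS KU
                     WHERE KU.dovizId = DT.dovizId
                     ORDER BY KU.tarih DESC) AS K;

    An index such as CREATE INDEX IX_tblKur_doviz_tarih ON tblKur (dovizId, tarih DESC) INCLUDE (satis) would make each lookup a single seek; whether that, rather than the hardware, is the real bottleneck is something an actual execution plan would have to confirm.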

    Read the article

  • Simple Self Join Query Bad Performance

    - by user1514042
    Could anyone advise on how to improve the performance of the following query? Note that the problem seems to be caused by the WHERE clause. Data (the table contains a huge set of rows, 500K+; the set of parameters it's called with assumes a return of 2-5K records per query, which currently takes 8-10 minutes):

        USE [SomeDb]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[Data](
            [x] [money] NOT NULL,
            [y] [money] NOT NULL,
            CONSTRAINT [PK_Data] PRIMARY KEY CLUSTERED ([x] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

    The query:

        select top 10000
            s.x as sx, e.x as ex, s.y as sy, e.y as ey,
            e.y - s.y as y_delta, e.x - s.x as x_delta
        from Data s
        inner join Data e
            on e.x > s.x and e.x - s.x between xFrom and xTo
        --where e.y - s.y > @yDelta -- when uncommented causes a huge delay

    Update 1 - Execution Plan

<?xml version="1.0" encoding="utf-16"?> <ShowPlanXML xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" Version="1.2" Build="11.0.2100.60" xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan"> <BatchSequence> <Batch> <Statements> <StmtSimple StatementCompId="1" StatementEstRows="100" StatementId="1" StatementOptmLevel="FULL" StatementOptmEarlyAbortReason="GoodEnoughPlanFound" StatementSubTreeCost="0.0263655" StatementText="select top 100&#xD;&#xA;s.x as sx,&#xD;&#xA;e.x as ex,&#xD;&#xA;s.y as sy,&#xD;&#xA;e.y as ey,&#xD;&#xA;e.y - s.y as y_delta,&#xD;&#xA;e.x - s.x as x_delta&#xD;&#xA;from Data s &#xD;&#xA; inner join Data e&#xD;&#xA; on e.x &gt; s.x and e.x - s.x between 100 and 105&#xD;&#xA;where e.y - s.y &gt; 0.01&#xD;&#xA;" StatementType="SELECT" QueryHash="0xAAAC02AC2D78CB56" QueryPlanHash="0x747994153CB2D637" RetrievedFromCache="true"> <StatementSetOptions ANSI_NULLS="true" ANSI_PADDING="true" ANSI_WARNINGS="true" ARITHABORT="true" CONCAT_NULL_YIELDS_NULL="true" NUMERIC_ROUNDABORT="false" QUOTED_IDENTIFIER="true" /> <QueryPlan DegreeOfParallelism="0" NonParallelPlanReason="NoParallelPlansInDesktopOrExpressEdition" CachedPlanSize="24" CompileTime="13" CompileCPU="13" CompileMemory="424"> <MemoryGrantInfo SerialRequiredMemory="0" SerialDesiredMemory="0" /> <OptimizerHardwareDependentProperties EstimatedAvailableMemoryGrant="52199" EstimatedPagesCached="14561" EstimatedAvailableDegreeOfParallelism="4" /> <RelOp AvgRowSize="55" EstimateCPU="1E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="100" LogicalOp="Compute Scalar" NodeId="0" Parallel="false" PhysicalOp="Compute Scalar" EstimatedTotalSubtreeCost="0.0263655"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> <ColumnReference Column="Expr1004" /> <ColumnReference Column="Expr1005" /> </OutputList> <ComputeScalar> <DefinedValues> <DefinedValue> <ColumnReference Column="Expr1004" /> <ScalarOperator ScalarString="[SomeDb].[dbo].[Data].[y] as [e].[y]-[SomeDb].[dbo].[Data].[y] as [s].[y]"> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </Identifier> </ScalarOperator> 
<ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> </DefinedValue> <DefinedValue> <ColumnReference Column="Expr1005" /> <ScalarOperator ScalarString="[SomeDb].[dbo].[Data].[x] as [e].[x]-[SomeDb].[dbo].[Data].[x] as [s].[x]"> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> </DefinedValue> </DefinedValues> <RelOp AvgRowSize="39" EstimateCPU="1E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="100" LogicalOp="Top" NodeId="1" Parallel="false" PhysicalOp="Top" EstimatedTotalSubtreeCost="0.0263555"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="100" ActualEndOfScans="1" ActualExecutions="1" /> </RunTimeInformation> <Top RowCount="false" IsPercent="false" WithTies="false"> <TopExpression> <ScalarOperator ScalarString="(100)"> <Const ConstValue="(100)" /> </ScalarOperator> </TopExpression> <RelOp AvgRowSize="39" EstimateCPU="151828" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="100" LogicalOp="Inner Join" NodeId="2" Parallel="false" PhysicalOp="Nested Loops" EstimatedTotalSubtreeCost="0.0263455"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="100" ActualEndOfScans="0" ActualExecutions="1" /> </RunTimeInformation> <NestedLoops Optimized="false"> <OuterReferences> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </OuterReferences> <RelOp AvgRowSize="23" EstimateCPU="1.80448" EstimateIO="3.76461" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="1" LogicalOp="Clustered Index Scan" NodeId="3" Parallel="false" PhysicalOp="Clustered Index Scan" EstimatedTotalSubtreeCost="0.0032831" TableCardinality="1640290"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="15225" ActualEndOfScans="0" ActualExecutions="1" /> </RunTimeInformation> <IndexScan Ordered="false" ForcedIndex="false" ForceScan="false" NoExpandHint="false"> 
<DefinedValues> <DefinedValue> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </DefinedValue> <DefinedValue> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </DefinedValue> </DefinedValues> <Object Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Index="[PK_Data]" Alias="[e]" IndexKind="Clustered" /> </IndexScan> </RelOp> <RelOp AvgRowSize="23" EstimateCPU="0.902317" EstimateIO="1.88387" EstimateRebinds="1" EstimateRewinds="0" EstimatedExecutionMode="Row" EstimateRows="100" LogicalOp="Clustered Index Seek" NodeId="4" Parallel="false" PhysicalOp="Clustered Index Seek" EstimatedTotalSubtreeCost="0.0263655" TableCardinality="1640290"> <OutputList> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> </OutputList> <RunTimeInformation> <RunTimeCountersPerThread Thread="0" ActualRows="100" ActualEndOfScans="15224" ActualExecutions="15225" /> </RunTimeInformation> <IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" ForceSeek="false" ForceScan="false" NoExpandHint="false" Storage="RowStore"> <DefinedValues> <DefinedValue> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </DefinedValue> <DefinedValue> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> </DefinedValue> </DefinedValues> <Object Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Index="[PK_Data]" Alias="[s]" IndexKind="Clustered" /> <SeekPredicates> <SeekPredicateNew> <SeekKeys> <EndRange ScanType="LT"> <RangeColumns> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </RangeColumns> <RangeExpressions> <ScalarOperator ScalarString="[SomeDb].[dbo].[Data].[x] as [e].[x]"> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </Identifier> </ScalarOperator> </RangeExpressions> </EndRange> </SeekKeys> </SeekPredicateNew> </SeekPredicates> <Predicate> <ScalarOperator ScalarString="([SomeDb].[dbo].[Data].[x] as [e].[x]-[SomeDb].[dbo].[Data].[x] as [s].[x])&gt;=($100.0000) AND ([SomeDb].[dbo].[Data].[x] as [e].[x]-[SomeDb].[dbo].[Data].[x] as [s].[x])&lt;=($105.0000) AND ([SomeDb].[dbo].[Data].[y] as [e].[y]-[SomeDb].[dbo].[Data].[y] as [s].[y])&gt;(0.01)"> <Logical Operation="AND"> <ScalarOperator> <Compare CompareOp="GE"> <ScalarOperator> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> <ScalarOperator> <Const ConstValue="($100.0000)" /> </ScalarOperator> </Compare> </ScalarOperator> <ScalarOperator> <Compare CompareOp="LE"> <ScalarOperator> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="x" /> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="x" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> <ScalarOperator> <Const ConstValue="($105.0000)" /> </ScalarOperator> </Compare> </ScalarOperator> 
<ScalarOperator> <Compare CompareOp="GT"> <ScalarOperator> <Arithmetic Operation="SUB"> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[e]" Column="y" /> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[SomeDb]" Schema="[dbo]" Table="[Data]" Alias="[s]" Column="y" /> </Identifier> </ScalarOperator> </Arithmetic> </ScalarOperator> <ScalarOperator> <Const ConstValue="(0.01)" /> </ScalarOperator> </Compare> </ScalarOperator> </Logical> </ScalarOperator> </Predicate> </IndexScan> </RelOp> </NestedLoops> </RelOp> </Top> </RelOp> </ComputeScalar> </RelOp> </QueryPlan> </StmtSimple> </Statements> </Batch> </BatchSequence> </ShowPlanXML>
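    In the posted plan, the seek on the s side only has a one-sided seek predicate (s.x < e.x); both subtractions and the y comparison are applied as residual predicates, so the seek re-reads every earlier row (note ActualExecutions="15225" with ActualEndOfScans="15224" on the Clustered Index Seek). A hedged sketch of an algebraic rewrite that hands the optimizer a two-sided seekable range: e.x - s.x BETWEEN @xFrom AND @xTo is equivalent to s.x BETWEEN e.x - @xTo AND e.x - @xFrom, and e.x > s.x is implied whenever @xFrom > 0. The variable names are assumptions standing in for the literals 100/105 and 0.01 in the plan:

        DECLARE @xFrom money = 100, @xTo money = 105, @yDelta money = 0.01;

        SELECT TOP (10000)
            s.x AS sx, e.x AS ex, s.y AS sy, e.y AS ey,
            e.y - s.y AS y_delta, e.x - s.x AS x_delta
        FROM dbo.Data AS e
        JOIN dbo.Data AS s
            ON s.x >= e.x - @xTo      -- start of a two-sided range seek
           AND s.x <= e.x - @xFrom    -- end of the range
        WHERE e.y - s.y > @yDelta;    -- still residual, but over far fewer rows

    Each outer row then drives a narrow clustered index range seek instead of rescanning everything below e.x.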

    Read the article

  • How bad is opening and closing a SQL connection several times? What is the exact effect?

    - by Eren
    For example, I need to fill lots of DataTables with SqlDataAdapter's Fill() method:

        DataAdapter1.Fill(DataTable1);
        DataAdapter2.Fill(DataTable2);
        DataAdapter3.Fill(DataTable3);
        DataAdapter4.Fill(DataTable4);
        DataAdapter5.Fill(DataTable5);
        ....
        ....

    Even if all the data adapter objects use the same SqlConnection, each Fill method will open and close the connection unless the connection state is already open before the method call. What I want to know is how unnecessarily opening and closing SqlConnections affects the performance of the application. How far does it need to scale before the bad effects of this show up (100,000s of concurrent users?)? On a mid-size website (50,000 users daily), is it worth bothering to track down all the Fill() calls, keep them together in the code, and open the connection before any Fill() call and close it afterwards?
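    For what it's worth, ADO.NET pools SqlConnection objects by default, so Open/Close on a pooled connection normally checks a physical connection out of and back into the pool rather than tearing down the session. One way to sanity-check that pooling is working is to watch the server-side connection count while the application runs its Fill() calls; it should stay roughly flat. A minimal sketch to run on the server (grouping by client address is an arbitrary choice for readability):

        -- Physical connections per client host; rerun while the app exercises Fill()
        SELECT c.client_net_address,
               COUNT(*)            AS open_connections,
               MIN(c.connect_time) AS oldest_connect_time
        FROM sys.dm_exec_connections AS c
        GROUP BY c.client_net_address
        ORDER BY open_connections DESC;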

    Read the article

  • SQL running slow when executed through Java

    - by user270885
    I am trying to execute a query against an Oracle DB. When I run the query through SQLTools it executes in 2 seconds, but when I run the same query through Java it takes more than a minute. I am using the hint /*+ordered use_nl (a, b)*/ and the ojdbc6.jar driver. Is it because of any JARs? Please help: what's causing this?
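    A common first step for this kind of "fast in the GUI, slow from JDBC" gap is to confirm whether both executions actually used the same execution plan; bind variables, session optimizer settings, or driver behaviour can produce different child cursors for the same statement text. A hedged sketch, assuming you have access to V$SQL and DBMS_XPLAN (the LIKE filter and the sql_id are placeholders to adjust):

        -- Locate the statement and its plan hash for each child cursor
        SELECT sql_id, child_number, plan_hash_value, executions, elapsed_time
        FROM   v$sql
        WHERE  sql_text LIKE '%use_nl (a, b)%';

        -- Display the actual plan used by a given child cursor
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('<sql_id here>', NULL, 'TYPICAL'));

    If the two executions show different plan_hash_value entries, the problem is plan-related (often bind peeking) rather than anything in the JAR itself.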

    Read the article

  • SQL Server 2012 Service Pack 2 Cumulative Update #1 is available!

    - by AaronBertrand
    The SQL Server team has released SQL Server 2012 SP2 Cumulative Update #1. This cumulative update brings Service Pack 2 up to date with the fixes from SP1 CU#10 and a few from CU#11, including the fix for the online index rebuild corruption issue I discussed recently on SQLPerformance.com. It also marks the first time in the SQL Server 2012 timeframe that both cumulative update branches are on roughly the same schedule, which makes many of us happy, I'm sure. :-) KB Article: KB #2976982 Build # is 11.0.5532...(read more)

    Read the article

  • SQL Server 2012 Service Pack 1 Cumulative Update #11 is available!

    - by AaronBertrand
    The SQL Server team has released SQL Server 2012 SP1 Cumulative Update #11. This cumulative update includes a fix for the online index rebuild corruption issue I discussed recently on SQLPerformance.com. KB Article: KB #2975396 Build # is 11.0.3449 Currently there are 32 public fixes listed. Relevant for builds 11.0.3000 -> 11.0.3448. Do not attempt to install on SQL Server 2012 RTM (any build < 11.0.3000) or any other major version. If you are on a different branch, see this blog...(read more)
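    Since this fix ships in two different branches, it's worth confirming which build an instance is on before choosing an update; a minimal sketch using SERVERPROPERTY (the expected builds come from the two posts above: 11.0.3449 after SP1 CU#11, 11.0.5532 after SP2 CU#1):

        -- e.g. 11.0.3449 on the SP1 branch, 11.0.5532 on the SP2 branch
        SELECT SERVERPROPERTY('ProductVersion') AS build,
               SERVERPROPERTY('ProductLevel')   AS service_pack_level,
               SERVERPROPERTY('Edition')        AS edition;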

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak).

    1. My Switch Is Bored

    Allow me to present the rare and interesting execution plan operator, Switch. Books Online has this to say about Switch:

    Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view:

        CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24));
        CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49));
        CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74));
        CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99));
        GO
        CREATE VIEW V1
        AS
        SELECT c1 FROM dbo.T1
        UNION ALL
        SELECT c1 FROM dbo.T2
        UNION ALL
        SELECT c1 FROM dbo.T3
        UNION ALL
        SELECT c1 FROM dbo.T4;

    Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement:

        ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);
        ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);
        ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);
        ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1);

    We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape):

        INSERT dbo.V1 (c1) VALUES (1);

    And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view:

        INSERT dbo.V1 (c1) VALUES (1), (2);

    Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage!
    2. Invisible Plan Operators

    The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it.

    The problem

    The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it:

        USE tempdb;
        GO
        CREATE TABLE dbo.Calendar
        (
            dt date NOT NULL,
            isWeekday bit NOT NULL,
            theYear smallint NOT NULL,

            CONSTRAINT PK__dbo_Calendar_dt
            PRIMARY KEY CLUSTERED (dt)
        );
        GO
        -- Monday is the first day of the week for me
        SET DATEFIRST 1;

        -- Add 100 years of data
        INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear)
        SELECT
            CA.dt,
            isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END,
            theYear = YEAR(CA.dt)
        FROM Sandpit.dbo.Numbers AS N
        CROSS APPLY
        (
            VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113)))
        ) AS CA (dt)
        WHERE N.n BETWEEN 1 AND 36525;

    The following query counts the number of weekend days in 2013:

        SELECT Days = COUNT_BIG(*)
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0;

    It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table:

        CREATE NONCLUSTERED INDEX Weekends
        ON dbo.Calendar(theYear)
        WHERE isWeekday = 0;

    The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index!

    Explanation

    You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row).
    All of these are correct, but that was before cost-based optimization got involved :)

    Cost-based optimization

    When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities, as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that.

    New Cardinality Estimates

    The new memo groups require two new cardinality estimates to be derived. First, LogOp_Idx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics:

        DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER;

    The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!):

        DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM;

    This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows:

        DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM;

    Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 22, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 21. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365 row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.
    This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it:

        SELECT Days = COUNT_BIG(*) OVER ()
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0;

    The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key:

        CREATE NONCLUSTERED INDEX Weekends
        ON dbo.Calendar(theYear, isWeekday)
        WHERE isWeekday = 0
        WITH (DROP_EXISTING = ON);

    There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;) With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek. That’s all for today; remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators:

        Segment and Sequence Project
        Common Subexpression Spools
        Why Plan Operators Run Backwards
        Row Goals and the Top Operator
        Hash Match Flow Distinct
        Top N Sort
        Index Spools and Page Splits
        Singleton and Range Seeks
        Bitmaps
        Hash Join Performance
        Compute Scalar

    © 2013 Paul White – All Rights Reserved. Twitter: @SQL_Kiwi
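    As an aside for readers wanting to reproduce the invisible-Filter demo above: trace flag 9130 can be enabled for a single statement via the QUERYTRACEON hint instead of globally. A minimal sketch (QUERYTRACEON requires sysadmin, and 9130 is undocumented and unsupported, so treat it as a test-instance-only tool):

        -- Show the Filter as a separate plan operator instead of a residual predicate
        SELECT Days = COUNT_BIG(*)
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0
        OPTION (QUERYTRACEON 9130);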

    Read the article
