Search Results

Search found 174 results on 7 pages for 'lob'.


  • Reducing Oracle LOB Memory Use in PHP, or Paul's Lesson Applied to Oracle

    - by christopher.jones
    Paul Reinheimer's PHP memory pro tip shows how re-assigning a value to a variable doesn't release the original value until the new data is ready. With large data lengths, this unnecessarily increases the peak memory usage of the application. In Oracle you might come across this situation when dealing with LOBs. Here's an example that selects an entire LOB into PHP's memory. I see this being done all the time, not that that is an excuse to code in this style. The alternative is to remove OCI_RETURN_LOBS to return a LOB locator, which can be accessed chunkwise with LOB->read(). In this memory usage example, I threw some CLOB rows into a table. Each CLOB was about 1.5MB. The fetching code looked like:

        $s = oci_parse($c, 'SELECT CLOBDATA FROM CTAB');
        oci_execute($s);
        echo "Start Current: " . memory_get_usage() . "\n";
        echo "Start Peak   : " . memory_get_peak_usage() . "\n";
        while (($r = oci_fetch_array($s, OCI_RETURN_LOBS)) !== false) {
            echo "Current: " . memory_get_usage() . "\n";
            echo "Peak   : " . memory_get_peak_usage() . "\n";
            // var_dump(substr($r['CLOBDATA'], 0, 10)); // do something with the LOB
            // unset($r);
        }
        echo "End Current: " . memory_get_usage() . "\n";
        echo "End Peak   : " . memory_get_peak_usage() . "\n";

    Without "unset" in the loop, $r retains the current data value while the new data is fetched:

        Start Current : 345300
        Start Peak    : 353676
        Current       : 1908092
        Peak          : 2958720
        Current       : 1908092
        Peak          : 4520972
        End Current   : 345668
        End Peak      : 4520972

    When I uncommented the "unset" line in the loop, PHP's peak memory usage was much lower:

        Start Current : 345376
        Start Peak    : 353676
        Current       : 1908168
        Peak          : 2958796
        Current       : 1908168
        Peak          : 2959108
        End Current   : 345744
        End Peak      : 2959108

    Even if you are using LOB->read(), unsetting variables in this manner will reduce the PHP program's peak memory usage. With LOBs in Oracle DB there is also DB memory use to consider; using LOB->free() is worthwhile for locators. Importantly, the OCI8 1.4.1 extension (from PECL, or included in PHP 5.3.2) has a LOB fix to free up Oracle's locators earlier. For long-running scripts using lots of LOBs, upgrading to OCI8 1.4.1 is recommended.
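    A minimal sketch of the chunkwise alternative mentioned above, assuming the same CTAB table and connection $c (the 1MB chunk size is an arbitrary choice):

        $s = oci_parse($c, 'SELECT CLOBDATA FROM CTAB');
        oci_execute($s);
        while (($r = oci_fetch_array($s, OCI_ASSOC)) !== false) {
            $lob = $r['CLOBDATA'];      // without OCI_RETURN_LOBS this is a LOB locator
            while (!$lob->eof()) {
                $chunk = $lob->read(1048576);  // read 1MB at a time, not the whole LOB
                // process $chunk here
            }
            $lob->free();   // release the locator's database-side resources promptly
            unset($r);
        }

    Only one chunk is held in PHP memory at a time, so the peak stays flat regardless of the CLOB size.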

    Read the article

  • T-SQL Tuesday #006: LOB Data

    - by Adam Machanic
    Just a quick note for those of you who may not have seen Michael Coles's post (and a reminder for the rest of you): the topic of this month's T-SQL Tuesday is LOB data. Get your posts ready; next Tuesday we go big!...(read more)

    Read the article

  • Need some help understanding IO Statistics

    - by Abe Miessler
    I have a query with a very costly Index Seek operation in the execution plan. In order to track down the cause I set STATISTICS IO on and ran it. In the problem section it gave the following statistics:

        Table '#TempStudents_Enrollment2_____________________________________000000004D5F'. Scan count 0, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#TempRace2______________________________________________000000004D58'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'RefRace'. Scan count 120, logical reads 240, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'RefFedEnctyRaceCatg'. Scan count 18, logical reads 36, physical reads 2, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#43B0BA0F'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#42BC95D6'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#41C8719D'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#40D44D64'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#LEA2_________________________________________________000000004D56'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#39332B9C'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#School2________________________________________________000000004D57'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#GenderKey______________________________________________000000004D5A'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#LangAcqKey_____________________________________________000000004D5B'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#TransferCatKey___________________________________________000000004D5C'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#ResCatKey______________________________________________000000004D5D'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'RPT_SnapShot_1_4_StuPgm_Denorm'. Scan count 2344954, logical reads 4992518, physical reads 16, read-ahead reads 8, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#3FE0292B'. Scan count 1, logical reads 2344954, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'RPT_SnapShot_1_4_StuEnrlmt_Denorm'. Scan count 20, logical reads 87679, physical reads 0, read-ahead reads 87425, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#GradeKey_______________________________________________000000004D59'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    What should I look for in here when I'm looking to improve the performance? The line with over 2 million for the scan count looked suspicious to me, but I really don't know. Does anyone see anything here that I should look into in more detail?
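    For what it's worth, the line that stands out is the scan count of 2,344,954 against RPT_SnapShot_1_4_StuPgm_Denorm, which usually means a nested loops join is seeking into that table once per outer row. If the execution plan confirms that, a covering index on the join columns is the usual fix. The sketch below is hypothetical, since the query itself isn't shown; the column names must be taken from the plan's seek predicate and output list:

        -- Hypothetical: index the nested-loops join key and cover the output columns.
        -- StudentID and ProgramCode are placeholders, not known column names.
        CREATE NONCLUSTERED INDEX IX_StuPgm_Denorm_Student
            ON dbo.RPT_SnapShot_1_4_StuPgm_Denorm (StudentID)
            INCLUDE (ProgramCode);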

    Read the article

  • LOB Pointer Indexing Proposal

    - by jchang
    My observations are that IO to LOB pages (and row-overflow pages as well?) is restricted to synchronous IO, which can result in serious problems when these reside on disk drive storage. Even if the storage system is composed of hundreds of HDDs, the realizable IO performance to LOB pages is that of a single disk, with some improvement in parallel execution plans. The reason for this appears to be that each thread must work its way through a page to find the LOB pointer information, and then generates...(read more)

    Read the article

  • WPF Control Toolkits Comparison for LOB Apps

    In preparation for a new WPF project I've been researching options for WPF control toolkits. While we want a lot of the benefits of WPF, the application is a fairly typical line of business (LOB) application. So we're not focused on things like media and animations, but instead on a simple, solid, intuitive, and modern user interface that allows for well-architected separation of business logic and presentation layers. While WPF is mature, it hasn't lived the long life that WinForms has yet, so there is still a lot of room for third-party and community control toolkits to fill the gaps between the controls that ship with the Framework. There are two such gaps I was concerned about. As this is an LOB app, we need to present lots of data, and not surprisingly much of it is in grid format, with the need for high performance, grouping, inline editing, aggregation, printing and exporting, and things that we've been doing with LOB apps for a long time. In addition, we want a dashboard style for the UI, in which the user can rearrange and shrink and grow tiles that house the content and functionality. From a cost perspective, building these types of well-performing controls from scratch doesn't make sense. So I evaluated what you get from the .NET Framework along with a few different options for control toolkits. I tried to be fairly thorough, but know that this isn't a detailed benchmarking comparison or intense evaluation. It's just meant to be a feature-set comparison to be used when thinking about building an LOB app in WPF. I tried to list important feature differences and notes based on my experience with the trial versions and what I found in documentation, reference materials and samples. I've also listed the importance of the controls based on how I think they are needed in LOB apps. There are several toolkits available, but given that I don't have unlimited time, I picked just a few. Maybe I'll add more later. The toolkits I compared are: Telerik's RadControls for WPF, since I had heard some good things about Telerik; Infragistics NetAdvantage WPF, since both I and the customer have some experience with the vendor's tools; the WPF Toolkit on CodePlex, since many of my colleagues have used it; and the Blacklight CodePlex project, which had WPF support for the Tile View control (with release 4.3, WPF is not going to be supported in favor of focusing only on Silverlight controls, so I dropped that from the comparison). Click Here to Download the WPF Control Toolkits Comparison. Hopefully this helps someone out there. Feel free to post a comment on your experiences or if you think something I listed is incorrect or missing.

    Read the article

  • SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014

    - by Pinal Dave
    SQL Server 2014 has new cardinality estimation logic. The cardinality estimation logic is responsible for the quality of query plans and is a major factor in the performance of any query. This logic had not been updated for quite a while, but in SQL Server 2014 it has been re-designed. The new logic incorporates assumptions and algorithms for both OLTP and warehousing workloads. "Cardinality estimates are a prediction of the number of rows in the query result. The query optimizer uses these estimates to choose a plan for executing the query. The quality of the query plan has a direct impact on improving query performance." ~ Source: MSDN

    Let us see a quick example of how the new cardinality estimation improves performance for a query. I will be using the AdventureWorks database for my example. Before we start with this demonstration, remember that even though you have SQL Server 2014, to see the effect of the new cardinality estimates you will need your database compatibility level set to 120, which is for SQL Server 2014. If your server instance is SQL Server 2014 but you have set your database compatibility level to 110 or any earlier version, your query will perform as it would on an older version of SQL Server. Now we will execute the following query in two different compatibility modes and see its performance. (Note that my SQL Server instance is of version 2014.)

        USE AdventureWorks2014
        GO
        -- -------------------------------
        -- NEW Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
        GO
        EXEC [dbo].[uspGetManagerEmployees] 44
        GO
        -- -------------------------------
        -- Old Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
        GO
        EXEC [dbo].[uspGetManagerEmployees] 44
        GO

    Result of Statistics IO

    Compatibility level 120:

        Table 'Person'. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Compatibility level 110:

        Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Person'. Scan count 0, logical reads 137, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    You will notice that in the case of compatibility level 110 there are 137 logical reads from the Person table, whereas in the case of compatibility level 120 there are only 6 logical reads from the Person table. This drastically improves the performance of the query. If we enable the execution plan, we can see the same as well. I hope you will find this quick example helpful. You can read more about this in my latest Pluralsight course.
    Reference: Pinal Dave (http://blog.SQLAuthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
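    The post shows the STATISTICS IO output but not the command that captures it; a minimal sketch that wraps the same two executions (assuming the AdventureWorks2014 setup above):

        SET STATISTICS IO ON;
        GO
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120;
        GO
        EXEC [dbo].[uspGetManagerEmployees] 44;   -- new cardinality estimator
        GO
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110;
        GO
        EXEC [dbo].[uspGetManagerEmployees] 44;   -- legacy cardinality estimator
        GO
        SET STATISTICS IO OFF;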

    Read the article

  • Solving Slow Query

    - by Chris
    We are installing a new forum (YAF) for our site. One of the stored procedures is extremely slow, in fact it always times out in the browser. If I run it in SSMS it takes nearly 10 minutes to complete. Is there a way to find out what part of this query is taking so long?

    The query:

        DECLARE @BoardID int
        DECLARE @UserID int
        DECLARE @CategoryID int = null
        DECLARE @ParentID int = null
        SET @BoardID = 1
        SET @UserID = 2

        SELECT a.CategoryID,
               Category = a.Name,
               ForumID = b.ForumID,
               Forum = b.Name,
               Description,
               Topics = [dbo].[yaf_forum_topics](b.ForumID),
               Posts = [dbo].[yaf_forum_posts](b.ForumID),
               Subforums = [dbo].[yaf_forum_subforums](b.ForumID, @UserID),
               LastPosted = t.LastPosted,
               LastMessageID = t.LastMessageID,
               LastUserID = t.LastUserID,
               LastUser = IsNull(t.LastUserName, (SELECT Name FROM [dbo].[yaf_User] x WHERE x.UserID = t.LastUserID)),
               LastTopicID = t.TopicID,
               LastTopicName = t.Topic,
               b.Flags,
               Viewing = (SELECT COUNT(1)
                          FROM [dbo].[yaf_Active] x
                          JOIN [dbo].[yaf_User] usr ON x.UserID = usr.UserID
                          WHERE x.ForumID = b.ForumID AND usr.IsActiveExcluded = 0),
               b.RemoteURL,
               x.ReadAccess
        FROM [dbo].[yaf_Category] a
        JOIN [dbo].[yaf_Forum] b ON b.CategoryID = a.CategoryID
        JOIN [dbo].[yaf_vaccess] x ON x.ForumID = b.ForumID
        LEFT OUTER JOIN [dbo].[yaf_Topic] t
            ON t.TopicID = [dbo].[yaf_forum_lasttopic](b.ForumID, @UserID, b.LastTopicID, b.LastPosted)
        WHERE a.BoardID = @BoardID
          AND ((b.Flags & 2) = 0 OR x.ReadAccess <> 0)
          AND (@CategoryID IS NULL OR a.CategoryID = @CategoryID)
          AND ((@ParentID IS NULL AND b.ParentID IS NULL) OR b.ParentID = @ParentID)
          AND x.UserID = @UserID
        ORDER BY a.SortOrder, b.SortOrder

    IO statistics:

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_Active'. Scan count 14, logical reads 28, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_User'. Scan count 0, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_Topic'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_Category'. Scan count 0, logical reads 28, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_Forum'. Scan count 0, logical reads 488, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_UserGroup'. Scan count 231, logical reads 693, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_ForumAccess'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_AccessMask'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'yaf_UserForum'. Scan count 1, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Client statistics:

        Client Execution Time: 11:54:01

        Query Profile Statistics
          Number of INSERT, DELETE and UPDATE statements: 0 (avg 0.0000)
          Rows affected by INSERT, DELETE, or UPDATE statements: 0 (avg 0.0000)
          Number of SELECT statements: 8 (avg 8.0000)
          Rows returned by SELECT statements: 19 (avg 19.0000)
          Number of transactions: 0 (avg 0.0000)

        Network Statistics
          Number of server roundtrips: 3 (avg 3.0000)
          TDS packets sent from client: 3 (avg 3.0000)
          TDS packets received from server: 34 (avg 34.0000)
          Bytes sent from client: 3166 (avg 3166.0000)
          Bytes received from server: 128802 (avg 128802.0000)

        Time Statistics
          Client processing time: 156478
          Total execution time: 572009
          Wait time on server replies: 415531

    Execution Plan
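    One way to find out which part is slow is to bisect the query. The scalar functions ([dbo].[yaf_forum_topics], [dbo].[yaf_forum_posts], [dbo].[yaf_forum_subforums] and [dbo].[yaf_forum_lasttopic]) run once per row, and their internal IO does not show up in the STATISTICS IO output above, so a reasonable first test is to stub them out and compare timings. A sketch only, using a simplified version of the original query:

        SET STATISTICS TIME, IO ON;

        DECLARE @BoardID int = 1, @UserID int = 2;

        -- Same query shape, with the per-row scalar UDF columns stubbed out.
        -- If this version runs quickly, the UDFs (or the correlated Viewing
        -- subquery) are where the time is going.
        SELECT a.CategoryID,
               Category = a.Name,
               ForumID  = b.ForumID,
               Forum    = b.Name,
               Topics    = 0,  -- was [dbo].[yaf_forum_topics](b.ForumID)
               Posts     = 0,  -- was [dbo].[yaf_forum_posts](b.ForumID)
               Subforums = 0   -- was [dbo].[yaf_forum_subforums](b.ForumID, @UserID)
        FROM [dbo].[yaf_Category] a
        JOIN [dbo].[yaf_Forum] b ON b.CategoryID = a.CategoryID
        JOIN [dbo].[yaf_vaccess] x ON x.ForumID = b.ForumID
        WHERE a.BoardID = @BoardID
          AND x.UserID = @UserID
        ORDER BY a.SortOrder, b.SortOrder;

        SET STATISTICS TIME, IO OFF;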

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for. In general, that’s quite correct; however, there are cases where SQL Server might sneakily retrieve a LOB column…

    Example Table

    Here’s a T-SQL script to create the test table and populate it with 1,000 rows:

        CREATE TABLE dbo.LOBtest
        (
            pk INTEGER IDENTITY NOT NULL,
            some_value INTEGER NULL,
            lob_data VARCHAR(MAX) NULL,
            another_column CHAR(5) NULL,
            CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC)
        );
        GO
        DECLARE @Data VARCHAR(MAX);
        SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

        WITH Numbers (n) AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
            FROM master.sys.columns C1, master.sys.columns C2
        )
        INSERT LOBtest WITH (TABLOCKX) (some_value, lob_data)
        SELECT TOP (1000) N.n, @Data
        FROM Numbers N
        WHERE N.n <= 1000;

    Test 1: A Simple Update

    Let’s run a query to subtract one from every value in the some_value column:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources. The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time. The query plan is also very simple. Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan. The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed. The some_value column is used by the Compute Scalar to calculate the new value. (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT.)

    Test 2: Simple Update with an Index

    Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column:

        CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest (some_value)
        INCLUDE (lob_data)
        WITH (FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON);

    This is not a useful index for our simple update query; imagine that someone else created it for a different purpose. Let’s run our update query again:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms. The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before (click to enlarge): the Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update. SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details). Our simple query does change the value of some_value in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_value columns. Cool.
    Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table. Let’s see what happens if we make the nonclustered index unique.

    Test 3: Simple Update with a Unique Index

    Here’s the script to create a new unique index, and drop the old one:

        CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest (some_value)
        INCLUDE (lob_data)
        WITH (FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON);
        GO
        DROP INDEX [IX dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest;

    Remember that SQL Server only enforces uniqueness on index keys (the some_value column). The lob_data column is simply stored at the leaf-level of the non-clustered index. With that in mind, we might expect this change to make very little difference. Let’s see:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    Whoa! Now look at the elapsed time and logical reads:

        Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0,
        lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

        CPU time = 172 ms, elapsed time = 16172 ms.

    Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column?

    The Query Plan

    The query plan for test 3 looks a bit more complex than before. In fact, the bottom level is exactly the same as we saw with the non-unique index. The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason). After all, we need to explain where all the LOB logical reads are coming from. Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads?

    Updates that Sneakily Read Data

    We have to go as far as the Clustered Index Update operator before we see LOB data in the output list. [Expr1020] is a bit flag added by an earlier Compute Scalar. It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD. The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column. At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation. SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update. The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then. Sneaky!

    Why the LOB Data Is Needed

    This is all very interesting (I hope), but why is SQL Server reading the LOB data? For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.
    I’ll reproduce it here for convenience. Notice that this is a wide (per-index) update plan. SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too. I’ll talk more about this difference shortly.

    The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient. It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index.

    Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3. If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4. The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4.

    By applying net changes, SQL Server can also avoid false unique-key violations. If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3, of course) and the query would fail. You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down. That’s fine, but it’s not possible to generalize this logic to work with every possible update query.

    SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations. It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits. As a result, you will often receive a wide update plan, even when you can see that no violations are possible.

    Another benefit of this optimization is that it includes a sort on the index key as part of its work. Processing the index changes in index key order promotes sequential I/O against the nonclustered index.

    A side-effect of all this is that the net changes might include one or more inserts. In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column. This is the reason SQL Server reads the LOB data as part of the Clustered Index Update.

    In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs). In order to generate the correct index key delete operation, it needs the old key value.

    The irony is that in this case the Split/Sort/Collapse optimization is anything but. Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it.

    Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates.

    Beating the Set-Based Update with a Cursor

    One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.
    Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data:

        SET NOCOUNT ON;
        SET STATISTICS XML, IO, TIME OFF;

        DECLARE @PK INTEGER, @StartTime DATETIME;
        SET @StartTime = GETUTCDATE();

        DECLARE curUpdate CURSOR
            LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS
        FOR
            SELECT L.pk
            FROM LOBtest L
            ORDER BY L.pk ASC;

        OPEN curUpdate;

        WHILE (1 = 1)
        BEGIN
            FETCH NEXT FROM curUpdate INTO @PK;

            IF @@FETCH_STATUS = -1 BREAK;
            IF @@FETCH_STATUS = -2 CONTINUE;

            UPDATE dbo.LOBtest
            SET some_value = some_value - 1
            WHERE CURRENT OF curUpdate;
        END;

        CLOSE curUpdate;
        DEALLOCATE curUpdate;

        SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE());

    That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!). I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it. One could just as well use a WHERE clause that specified the primary key value instead.

    Clustered Indexes

    A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index. Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes:

        IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL
            DROP TABLE dbo.LOBtest;
        GO
        CREATE TABLE dbo.LOBtest
        (
            pk INTEGER NOT NULL,
            some_value INTEGER NULL,
            lob_data VARCHAR(MAX) NULL,
            another_column CHAR(5) NULL,
            CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC)
        );
        GO
        DECLARE @Data VARCHAR(MAX);
        SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

        WITH Numbers (n) AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
            FROM master.sys.columns C1, master.sys.columns C2
        )
        INSERT LOBtest WITH (TABLOCKX) (pk, some_value, lob_data)
        SELECT TOP (1000) N.n, N.n, @Data
        FROM Numbers N
        WHERE N.n <= 1000;

    Now here’s a query to modify the cluster keys:

        UPDATE dbo.LOBtest
        SET pk = pk + 1;

    As you can see from the query plan, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection. In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan. The performance is not great, as you might expect (even though there is no non-clustered index to maintain):

        Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0,
        lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

        Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0,
        lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.

        SQL Server Execution Times:
        CPU time = 483 ms, elapsed time = 17884 ms.

    Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns). A unique non-clustered index (on a heap) works well too. Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads. (In fact the heap is rather more efficient.) There are lots more fun combinations to try that I don’t have space for here.

    Final Thoughts

    The behaviour shown in this post is not limited to LOB data by any means.
    If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps.

    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ
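    As a follow-up to that closing warning, one way to find indexes in a database that match the risky pattern (unique, with a MAX-typed included column) is to query the catalog views. A sketch only, not from the original post:

        -- Unique indexes that INCLUDE a (N)VARCHAR(MAX)/VARBINARY(MAX)/XML column.
        -- max_length = -1 in sys.columns indicates a MAX-typed column.
        SELECT OBJECT_NAME(i.object_id) AS table_name,
               i.name AS index_name,
               c.name AS included_lob_column
        FROM sys.indexes AS i
        JOIN sys.index_columns AS ic
            ON ic.object_id = i.object_id AND ic.index_id = i.index_id
        JOIN sys.columns AS c
            ON c.object_id = ic.object_id AND c.column_id = ic.column_id
        WHERE i.is_unique = 1
          AND ic.is_included_column = 1
          AND c.max_length = -1;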

    Read the article

  • SQL Server 2005: reclaiming LOB space

    - by AndrewD
    Hello all, I've got an interesting table in one of my DBs that's confusing me. The table in question has a few LOB-type columns (two nvarchar(max) and a text) and it looks like there are some strange space issues going on. From this query:

        SELECT type_desc, SUM(total_pages) * 8 [Size in kb]
        FROM sys.partitions p
        JOIN sys.allocation_units a ON p.partition_id = a.container_id
        WHERE p.object_id = OBJECT_ID('asyncoperationbase')
        GROUP BY type_desc;

    I get:

        type_desc          Size in kb
        IN_ROW_DATA        27936
        LOB_DATA           1198144
        ROW_OVERFLOW_DATA  0

    (There are just under 8000 rows in the table; each row has a data length of ~10k, not counting the LOB data.) Here's where it gets somewhat interesting:

        SELECT (
            SUM(DATALENGTH(aob.WorkflowState)) +
            SUM(DATALENGTH(aob.[Message])) +
            SUM(DATALENGTH(aob.[Data]))
        ) / 1024
        FROM AsyncOperationBase aob

    returns: 76617

    As I'm reading it, it looks like the ~75MB of LOB data is using over a gig of space to be stored. I would expect some overhead, but not quite that much. Thanks, Andrew
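    If the unused LOB space comes from deleted or shrunk values, SQL Server 2005 does not always give it back on its own. Two documented maintenance options that may reclaim it are sketched below (MyDatabase is a placeholder for the actual database name, and both operations can be expensive on a large table, so test first):

        -- Compact the LOB allocation units of every index on the table
        ALTER INDEX ALL ON dbo.AsyncOperationBase
            REORGANIZE WITH (LOB_COMPACTION = ON);

        -- Reclaim space left behind by variable-length/LOB columns that were
        -- dropped at some point (only helps if columns were actually dropped)
        DBCC CLEANTABLE (N'MyDatabase', N'dbo.AsyncOperationBase');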

    Read the article

  • T-SQL Tuesday #006: LOB, row-overflow and locking behavior

    - by Michael Zilberstein
    This post is my contribution to T-SQL Tuesday #006, hosted this time by Michael Coles. Actually, this post was born last Thursday when I attended Kalen Delaney's "Deep Dive into SQL Server Internals" seminar in Tel-Aviv. I asked a question, Kalen didn't have an answer at hand, so during a break I created a demo in order to check a certain behavior. The demo comes later in this post, but first a small teaser. I have a MyTable table with 10 rows. I take 2 rows that reside on different pages. In the first session...(read more)

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes. Query tuning is not complete as soon as the query returns results quickly in the development or test environments. In production, your query will compete for memory, CPU, locks, I/O and other resources on the server. Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL.

    As always, we’ll need some example data. In fact, we are going to use three tables today, each of which is structured the same way: 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row. The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type. A script to create a database with the three tables and load the sample data follows:

        USE master;
        GO
        IF DB_ID('SortTest') IS NOT NULL
            DROP DATABASE SortTest;
        GO
        CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN;
        GO
        ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB);
        GO
        ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB);
        GO
        ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF;
        ALTER DATABASE SortTest SET AUTO_CLOSE OFF;
        ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON;
        ALTER DATABASE SortTest SET AUTO_SHRINK OFF;
        ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON;
        ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON;
        ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE;
        ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF;
        ALTER DATABASE SortTest SET MULTI_USER;
        ALTER DATABASE SortTest SET RECOVERY SIMPLE;

        USE SortTest;
        GO
        CREATE TABLE dbo.TestCHAR
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding CHAR(3999) NOT NULL,
            CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id)
        );
        CREATE TABLE dbo.TestMAX
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding VARCHAR(MAX) NOT NULL,
            CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id)
        );
        CREATE TABLE dbo.TestTEXT
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding TEXT NOT NULL,
            CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id)
        );
        -- =============
        -- Load TestCHAR (about 3s)
        -- =============
        INSERT INTO dbo.TestCHAR WITH (TABLOCKX) (padding)
        SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999)
        FROM
        (
            SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1
            FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3
            ORDER BY n ASC
        ) AS Data
        ORDER BY Data.n ASC;
        -- ============
        -- Load TestMAX (about 3s)
        -- ============
        INSERT INTO dbo.TestMAX WITH (TABLOCKX) (padding)
        SELECT CONVERT(VARCHAR(MAX), padding)
        FROM dbo.TestCHAR
        ORDER BY id;
        -- =============
        -- Load TestTEXT (about 5s)
        -- =============
        INSERT INTO dbo.TestTEXT WITH (TABLOCKX) (padding)
        SELECT CONVERT(TEXT, padding)
        FROM dbo.TestCHAR
        ORDER BY id;
        -- ==========
        -- Space used
        -- ==========
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR';
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX';
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT';
        CHECKPOINT;

    That takes around 15 seconds to run, and shows the space allocated to each table in its output. To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.
    The basic shape of the test query is the same for each of the three test tables:

        SELECT TOP (150) T.id, T.padding
        FROM dbo.Test AS T
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

    Test 1 – CHAR(3999)

    Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results. This seems slow, considering that the table only has 50,000 rows. Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time. Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially.

    Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring. The script below monitors STATISTICS IO output and the amount of tempdb used by the test query. We will also run a Profiler trace to capture any warnings generated during query execution.

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150) TC.id, TC.padding
        FROM dbo.TestCHAR AS TC
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    Let’s take a closer look at the statistics and query plan generated from this. Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB. The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row. With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB.

    Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first). This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts. In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution.

    Notice that the memory grant is significantly larger than the expected size of the data to be sorted. SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed. Sorts typically require a very large number of comparisons, so this is usually a very effective optimization. One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself. SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect.
    In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use. The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory. Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall.

    If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant. The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources. More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post.

    CHAR(3999) Performance Summary:
      5 seconds elapsed time
      243MB memory grant
      195MB tempdb usage
      192MB estimated sort set
      25,043 logical reads
      Sort Warning

    Test 2 – VARCHAR(MAX)

    We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150) TM.id, TM.padding
        FROM dbo.TestMAX AS TM
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    This time the query takes around 8 seconds to complete (3 seconds longer than Test 1). Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB. The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file. Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case. In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting.

    MAX Performance Summary:
      8 seconds elapsed time
      245MB memory grant
      391MB tempdb usage
      193MB estimated sort set
      25,043 logical reads
      Sort warning

    Test 3 – TEXT

    The same test again, but using the deprecated TEXT data type for the padding column:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150) TT.id, TT.padding
        FROM dbo.TestTEXT AS TT
        ORDER BY NEWID()
        OPTION (MAXDOP 1, RECOMPILE);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    This time the query runs in 500ms. If you look at the metrics we have been checking so far, it’s not hard to understand why:

    TEXT Performance Summary:
      0.5 seconds elapsed time
      9MB memory grant
      5MB tempdb usage
      5MB estimated sort set
      207 logical reads
      596 LOB logical reads
      Sort warning

    SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much. Why is the data size so much smaller? The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here?

    TEXT versus MAX Storage

    The answer lies in how columns of the TEXT data type are stored. By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output. You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data.

    SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead. SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now).

    OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows. The question that normally arises at this point is: why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)?

    The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes. The default behaviour of the TEXT type is to be stored off-row, unless the ‘text in row’ table option is set suitably and there is room on the page. There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option. By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row. SQL Server Books Online has good coverage of both options in the topic In Row Data.

    The MAXOOR Table

    The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage. You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.
    This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

        CREATE TABLE dbo.TestMAXOOR
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding VARCHAR(MAX) NOT NULL,
            CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id)
        );

        EXECUTE sys.sp_tableoption
            @TableNamePattern = N'dbo.TestMAXOOR',
            @OptionName = 'large value types out of row',
            @OptionValue = 'true';

        SELECT large_value_types_out_of_row
        FROM sys.tables
        WHERE [schema_id] = SCHEMA_ID(N'dbo')
        AND name = N'TestMAXOOR';

        INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) (padding)
        SELECT SPACE(0)
        FROM dbo.TestCHAR
        ORDER BY id;

        UPDATE TM WITH (TABLOCK)
        SET padding.WRITE (TC.padding, NULL, NULL)
        FROM dbo.TestMAXOOR AS TM
        JOIN dbo.TestCHAR AS TC ON TC.id = TM.id;

        EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR';
        CHECKPOINT;

    Test 4 – MAXOOR

    We can now re-run our test on the MAXOOR (MAX out of row) table:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150) MO.id, MO.padding
        FROM dbo.TestMAXOOR AS MO
        ORDER BY NEWID()
        OPTION (MAXDOP 1, RECOMPILE);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    MAXOOR Performance Summary:
      0.3 seconds elapsed time
      245MB memory grant
      0MB tempdb usage
      193MB estimated sort set
      207 logical reads
      446 LOB logical reads
      No sort warning

    The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query). SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

    The Hidden Problem

    There is still a huge problem with this query though – it requires a 245MB memory grant. No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers. Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row).

    The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’. Why? Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed. SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages. This is an annoying limitation, and one which I hope will be addressed in a future version of the product.

    So why should we worry about this? Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need. 245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
    Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily? That’s 32,000 8KB pages that might be put to much better use.

    The Solution

    The answer is not to use the TEXT data type for the padding column. That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal. I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator. Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order.

    The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted. We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need. The following script implements that idea for all four tables:

        SET STATISTICS IO ON;

        WITH TestTable AS (SELECT * FROM dbo.TestCHAR),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id = ANY (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS (SELECT * FROM dbo.TestMAX),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS (SELECT * FROM dbo.TestTEXT),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS (SELECT * FROM dbo.TestMAXOOR),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

    All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb. The small remaining inefficiency is in reading the id column values from the clustered primary key index. As a clustered index, it contains all the in-row data at its leaf. The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead. The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure. This difference is reflected in the number of logical page reads performed by the four queries:

        Table 'TestCHAR'   logical reads 25511 lob logical reads 000
        Table 'TestMAX'    logical reads 25511 lob logical reads 000
        Table 'TestTEXT'   logical reads 00412 lob logical reads 597
        Table 'TestMAXOOR' logical reads 00413 lob logical reads 446

    We can increase the density of the id values by creating a separate nonclustered index on the id column only. This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

    The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data. The logical reads with the new indexes in place are:

        Table 'TestCHAR'   logical reads 835 lob logical reads 0
        Table 'TestMAX'    logical reads 835 lob logical reads 0
        Table 'TestTEXT'   logical reads 686 lob logical reads 597
        Table 'TestMAXOOR' logical reads 686 lob logical reads 448

    With the new index, all four queries use the same query plan (click to enlarge).

    Performance Summary:
      0.3 seconds elapsed time
      6MB memory grant
      0MB tempdb usage
      1MB sort set
      835 logical reads (CHAR, MAX)
      686 logical reads (TEXT, MAXOOR)
      597 LOB logical reads (TEXT)
      448 LOB logical reads (MAXOOR)
      No sort warning

    I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

    Conclusion

    This post is not about tuning queries that access columns containing big strings. It isn’t about the internal differences between TEXT and MAX data types either. It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load. No, this post is about something else: many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance. Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows. 5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is! If ten sessions ran that query at the same time in production, that’s 2.5GB of memory usage and 2GB hitting tempdb. Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either.

    The point of this post is that a basic understanding of execution plans is not enough. Tuning for logical reads and adding covering indexes is not enough. If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on. The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances.

    By the way, the examples in this post were written for SQL Server 2008. They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator. Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch…

    This post is dedicated to the people of Christchurch, New Zealand.

    © 2011 Paul White
    email: @[email protected]
    twitter: @SQL_Kiwi

    Read the article

  • Hibernate: Disabling contextual LOB creation as createClob() method threw error

    - by Giri Byaks
    Hi, I am using Hibernate 3.5.6 with Oracle 10g. I am seeing the exception below during initialization, but the application itself is working fine. What is the cause of this exception, and how can it be corrected? Exception: Disabling contextual LOB creation as createClob() method threw error : java.lang.reflect.InvocationTargetException Info: Oracle version: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 JDBC driver: Oracle JDBC driver, version: 11.1.0.7.0 Thanks, Girish

    Read the article

  • MVC LOB application

    - by João Passos
    Hi, I'm new to web development and I'm starting with an MVC project. I have a view to create a new Service. In this view, I need a button that shows a dialog with client names (I would also like to implement filters and paging in this dialog). Once the user selects a client from the dialog, I need to populate some combo boxes in the Service view with info relative to that particular client. How can I accomplish this? Is there any demo code or tutorial I can get my hands on to learn this? Thanks in advance for any tip.

    Read the article

  • PERSISTENCE LOB WITH JPA

    - by rfders
    Hi, folks. I have this problem using JBoss, EJB3 and MSSQL Express: I'm trying to persist an entity with a Blob attribute which was converted into a byte[]. Apparently it creates the entity perfectly; no error is logged to the JBoss console. But when I check the database, the column is null. I already checked the Object-to-byte[] conversion method and it is fine. I tried assigning the IMAGE and VARBINARY(MAX) data types to this column, so I really don't know what is happening or why the column is not inserted into the database. Thanks a lot.

    Read the article

  • Query performs poorly unless a temp table is used

    - by Paul McLoughlin
    The following query takes about 1 minute to run, and has the following IO statistics:

SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
FROM dbo.TRANS AS T
JOIN dbo.TRANS AS T2
  ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT
JOIN TASK_REQUESTS AS T3
  ON T3.CD=T.CD AND T3.RGN=T.RGN AND T3.TASK = 'UPDATE_MEM_BAL'
GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT

(4447 row(s) affected)
Table 'TRANSACTIONS'. Scan count 5977, logical reads 7527408, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TASK_REQUESTS'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times: CPU time = 58157 ms, elapsed time = 61437 ms.

If I instead introduce a temporary table, then the query returns quickly and performs fewer logical reads:

CREATE TABLE #MyTable(RGN VARCHAR(20) NOT NULL, CD VARCHAR(20) NOT NULL, PRIMARY KEY([RGN],[CD]));
INSERT INTO #MyTable(RGN, CD)
SELECT RGN, CD FROM TASK_REQUESTS WHERE TASK='UPDATE_MEM_BAL';

SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
FROM dbo.TRANS AS T
JOIN dbo.TRANS AS T2
  ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT
JOIN #MyTable AS T3
  ON T3.CD=T.CD AND T3.RGN=T.RGN
GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT

(4447 row(s) affected)
Table 'Worktable'. Scan count 5974, logical reads 382339, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TRANSACTIONS'. Scan count 4, logical reads 4547, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table '#MyTable________________________________________________________________000000000013'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times: CPU time = 1420 ms, elapsed time = 1515 ms.

The interesting thing for me is that the TASK_REQUESTS table is a small table (3 rows at present) and statistics are up to date on the table. Any idea why such different execution plans and execution times would be occurring? And ideally, how can I change things so that I don't need the temp table to get decent performance? The only real difference in the execution plans is that the temp table version introduces an index spool (eager spool) operation.
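An editor's note, not part of the original question: the temp table helps because SQL Server creates statistics on #MyTable after the INSERT, so the optimizer knows only three rows drive the join. A table variable, the obvious alternative, generally does not help here, because table variables carry no column statistics. A minimal sketch of that variant, assuming the same TASK_REQUESTS schema:

DECLARE @MyTable TABLE (RGN VARCHAR(20) NOT NULL, CD VARCHAR(20) NOT NULL, PRIMARY KEY (RGN, CD));

INSERT INTO @MyTable (RGN, CD)
SELECT RGN, CD FROM TASK_REQUESTS WHERE TASK = 'UPDATE_MEM_BAL';

-- No statistics are created on @MyTable, so the optimizer assumes a
-- low fixed row count instead of measuring one; the #temp table
-- version gets real statistics after the INSERT, which is what buys
-- the accurate estimate and the cheap plan.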

    Read the article

  • Working with Analytic Workspace Manager (AWM) - Part 8 Cube Metadata Analysis

    - by Mohan Ramanuja
    CUBE SIZE

select dbal.owner||'.'||substr(dbal.table_name,4) awname, sum(dbas.bytes)/1024/1024 as mb, dbas.tablespace_name
from dba_lobs dbal, dba_segments dbas
where dbal.column_name = 'AWLOB' and dbal.segment_name = dbas.segment_name
group by dbal.owner, dbal.table_name, dbas.tablespace_name
order by dbal.owner, dbal.table_name

SESSION RESOURCES

select vses.username||':'||vsst.sid username, vstt.name, max(vsst.value) value
from v$sesstat vsst, v$statname vstt, v$session vses
where vstt.statistic# = vsst.statistic# and vsst.sid = vses.sid
and vses.username like ('ATTRIBDW_OWN')
and vstt.name in ('session pga memory', 'session pga memory max', 'session uga memory', 'session uga memory max', 'session cursor cache count', 'session cursor cache hits', 'session stored procedure space', 'opened cursors current', 'opened cursors cumulative')
and vses.username is not null
group by vsst.sid, vses.username, vstt.name
order by vsst.sid, vses.username, vstt.name

OLAP PGA USE

select 'OLAP Pages Occupying: '|| round((((select sum(nvl(pool_size,1)) from v$aw_calc)) / (select value from v$pgastat where name = 'total PGA inuse')),2)*100||'%' info from dual
union
select 'Total PGA Inuse Size: '||value/1024||' KB' info from v$pgastat where name = 'total PGA inuse'
union
select 'Total OLAP Page Size: '|| round(sum(nvl(pool_size,1))/1024,0)||' KB' info from v$aw_calc
order by info desc

OLAP PGA USAGE PER USER

select vs.username, vs.sid,
       round(pga_used_mem/1024/1024,2)||' MB' pga_used,
       round(pga_max_mem/1024/1024,2)||' MB' pga_max,
       round(pool_size/1024/1024,2)||' MB' olap_pp,
       round(100*(pool_hits-pool_misses)/pool_hits,2) || '%' olap_ratio
from v$process vp, v$session vs, v$aw_calc va
where session_id = vs.sid and addr = paddr

CUBE LOADING SCRIPT

REM The 'set define off' statement is needed only if running this script through SQL*Plus.
REM If you are using another tool to run this script, the line below may be commented out.
set define off
BEGIN
  DBMS_CUBE.BUILD(
    'VALIDATE
     ATTRIBDW_OWN.CURRENCY USING (LOAD NO SYNCH, COMPILE SORT),
     ATTRIBDW_OWN.ACCOUNT USING (LOAD NO SYNCH, COMPILE SORT),
     ATTRIBDW_OWN.DATEDIM USING (LOAD NO SYNCH, COMPILE SORT),
     ATTRIBDW_OWN.CUSIP USING (LOAD NO SYNCH, COMPILE SORT),
     ATTRIBDW_OWN.ACCOUNTRETURN',
    'CCCCC',  -- refresh method
    false,    -- refresh after errors
    0,        -- parallelism
    true,     -- atomic refresh
    true,     -- automatic order
    false);   -- add dimensions
END;
/
BEGIN
  DBMS_CUBE.BUILD(
    'ATTRIBDW_OWN.CURRENCY USING (LOAD NO SYNCH, COMPILE SORT),
     ATTRIBDW_OWN.ACCOUNT USING (LOAD NO SYNCH, COMPILE SORT),
     ATTRIBDW_OWN.DATEDIM USING (LOAD NO SYNCH, COMPILE SORT),
     ATTRIBDW_OWN.CUSIP USING (LOAD NO SYNCH, COMPILE SORT),
     ATTRIBDW_OWN.ACCOUNTRETURN',
    'CCCCC',  -- refresh method
    false,    -- refresh after errors
    0,        -- parallelism
    true,     -- atomic refresh
    true,     -- automatic order
    false);   -- add dimensions
END;
/

VISUALIZATION OBJECT - AW$ATTRIBDW_OWN

CREATE TABLE "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"
(
    "PS#" NUMBER(10,0),
    "GEN#" NUMBER(10,0),
    "EXTNUM" NUMBER(8,0),
    "AWLOB" BLOB,
    "OBJNAME" VARCHAR2(256 BYTE),
    "PARTNAME" VARCHAR2(256 BYTE)
)
PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 STORAGE
(
    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT
)
TABLESPACE "ATTRIBDW_DATA" LOB
(
    "AWLOB"
)
STORE AS SECUREFILE
(
    TABLESPACE "ATTRIBDW_DATA"
DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1 CACHE NOCOMPRESS KEEP_DUPLICATES STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        )        PARTITION BY RANGE        (            "GEN#"        )        SUBPARTITION BY HASH        (            "PS#",            "EXTNUM"        )        SUBPARTITIONS 8        (            PARTITION "PTN1" VALUES LESS THAN (1) PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" LOB ("AWLOB") STORE AS SECUREFILE ( TABLESPACE "ATTRIBDW_DATA" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1 CACHE READS LOGGING NOCOMPRESS KEEP_DUPLICATES STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ( SUBPARTITION "SYS_SUBP661" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP662" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP663" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP664" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP665" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION            "SYS_SUBP666" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP667" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP668" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" ) ,            PARTITION "PTNN" VALUES LESS THAN (MAXVALUE) PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" LOB ("AWLOB") STORE AS SECUREFILE ( TABLESPACE "ATTRIBDW_DATA" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1 CACHE NOCOMPRESS KEEP_DUPLICATES STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ( SUBPARTITION "SYS_SUBP669" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP670" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP671" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP672" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP673" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION            "SYS_SUBP674" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP675" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP676" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" )        ) ;CREATE UNIQUE INDEX "ATTRIBDW_OWN"."ATTRIBDW_OWN_I$" ON "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"    (        "PS#", "GEN#", "EXTNUM"    )    PCTFREE 10 INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE    (        INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT    )    TABLESPACE "ATTRIBDW_DATA" ;CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000406980C00004$$" ON "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"    (        PCTFREE 10 INITRANS 1 
MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" LOCAL (PARTITION "SYS_IL_P711" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) ( SUBPARTITION "SYS_IL_SUBP695" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP696" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP697" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP698" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP699" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP700" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP701" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP702" TABLESPACE "ATTRIBDW_DATA" ) , PARTITION "SYS_IL_P712" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) ( SUBPARTITION "SYS_IL_SUBP703" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP704" TABLESPACE        "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP705" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP706" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP707" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP708" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP709" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP710" TABLESPACE "ATTRIBDW_DATA" ) ) PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE BUILD LOG  CREATE TABLE "ATTRIBDW_OWN"."CUBE_BUILD_LOG"        (            "BUILD_ID"          NUMBER,            "SLAVE_NUMBER"      NUMBER,            "STATUS"            VARCHAR2(10 BYTE),            "COMMAND"           VARCHAR2(25 BYTE),            "BUILD_OBJECT"      VARCHAR2(30 BYTE),            "BUILD_OBJECT_TYPE" VARCHAR2(10 BYTE),            "OUTPUT" CLOB,            "AW"            VARCHAR2(30 BYTE),            "OWNER"         VARCHAR2(30 BYTE),            "PARTITION"     VARCHAR2(50 BYTE),            "SCHEDULER_JOB" VARCHAR2(100 BYTE),            "TIME" TIMESTAMP (6)WITH TIME ZONE,        "BUILD_SCRIPT" CLOB,        "BUILD_TYPE"            VARCHAR2(22 BYTE),        "COMMAND_DEPTH"         NUMBER(2,0),        "BUILD_SUB_OBJECT"      VARCHAR2(30 BYTE),        "REFRESH_METHOD"        VARCHAR2(1 BYTE),        "SEQ_NUMBER"            NUMBER,        "COMMAND_NUMBER"        NUMBER,        "IN_BRANCH"             NUMBER(1,0),        "COMMAND_STATUS_NUMBER" NUMBER,        "BUILD_NAME"            VARCHAR2(100 BYTE)        )        SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE        (            INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT        )        TABLESPACE "ATTRIBDW_DATA" LOB        (            "OUTPUT"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        )        LOB        (            "BUILD_SCRIPT"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        ) ;CREATE UNIQUE INDEX 
"ATTRIBDW_OWN"."SYS_IL0000407294C00013$$" ON "ATTRIBDW_OWN"."CUBE_BUILD_LOG"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407294C00007$$" ON "ATTRIBDW_OWN"."CUBE_BUILD_LOG" ( PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE DIMENSION COMPILE  CREATE TABLE "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"        (            "ID"               NUMBER,            "SEQ_NUMBER"       NUMBER,            "ERROR#"           NUMBER(8,0) NOT NULL ENABLE,            "ERROR_MESSAGE"    VARCHAR2(2000 BYTE),            "DIMENSION"        VARCHAR2(100 BYTE),            "DIMENSION_MEMBER" VARCHAR2(100 BYTE),            "MEMBER_ANCESTOR"  VARCHAR2(100 BYTE),            "HIERARCHY1"       VARCHAR2(100 BYTE),            "HIERARCHY2"       VARCHAR2(100 BYTE),            "ERROR_CONTEXT" CLOB        )        SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "ATTRIBDW_DATA" LOB        (            "ERROR_CONTEXT"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING        ) ;COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ID"IS    'Current operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."SEQ_NUMBER"IS    'Cube build log sequence number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR#"IS    'Error number being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR_MESSAGE"IS    'Error text being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."DIMENSION"IS    'Name of dimension being compiled';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."DIMENSION_MEMBER"IS    'Problem dimension member';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."MEMBER_ANCESTOR"IS    'Problem dimension member''s parent';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."HIERARCHY1"IS    'First hierarchy involved in error';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."HIERARCHY2"IS    'Second hierarchy involved in error';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR_CONTEXT"IS    'Extra information for error';    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"IS    'Cube dimension compile log';CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407307C00010$$" ON "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE( INITIAL 1048576 NEXT 1048576 MAXEXTENTS 2147483645) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE OPERATING LOG  CREATE TABLE "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"        (            "INST_ID"    NUMBER NOT NULL ENABLE,            "SID"        NUMBER NOT NULL ENABLE,            "SERIAL#"    NUMBER NOT NULL ENABLE,            "USER#"      NUMBER NOT NULL ENABLE,            "SQL_ID"     VARCHAR2(13 BYTE),            "JOB"        NUMBER,            "ID"         NUMBER,            "PARENT_ID"  NUMBER,            "SEQ_NUMBER" NUMBER,       
     "TIME" TIMESTAMP (6)WITH TIME ZONE NOT NULL ENABLE,        "LOG_LEVEL"    NUMBER(4,0) NOT NULL ENABLE,        "DEPTH"        NUMBER(4,0),        "OPERATION"    VARCHAR2(15 BYTE) NOT NULL ENABLE,        "SUBOPERATION" VARCHAR2(20 BYTE),        "STATUS"       VARCHAR2(10 BYTE) NOT NULL ENABLE,        "NAME"         VARCHAR2(20 BYTE) NOT NULL ENABLE,        "VALUE"        VARCHAR2(4000 BYTE),        "DETAILS" CLOB        )        SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE        (            INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT        )        TABLESPACE "ATTRIBDW_DATA" LOB        (            "DETAILS"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        ) ;COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."INST_ID"IS    'Instance ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SID"IS    'Session ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SERIAL#"IS    'Session serial#';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."USER#"IS    'User ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SQL_ID"IS    'Executing SQL statement ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."JOB"IS    'Identifier of job';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."ID"IS    'Current operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."PARENT_ID"IS    'Parent operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SEQ_NUMBER"IS    'Cube build log sequence number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."TIME"IS    'Time of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."LOG_LEVEL"IS    'Verbosity level of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."DEPTH"IS    'Nesting depth of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."OPERATION"IS    'Current operation';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SUBOPERATION"IS    'Current suboperation';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."STATUS"IS    'Status of current operation';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."NAME"IS    'Name of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."VALUE"IS    'Value of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."DETAILS"IS    'Extra information for record';    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"IS    'Cube operations log';CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407301C00018$$" ON "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE REJECTED RECORDS CREATE TABLE "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"        (            "ID"            NUMBER,            "SEQ_NUMBER"    NUMBER,            "ERROR#"        NUMBER(8,0) NOT NULL ENABLE,            
"ERROR_MESSAGE" VARCHAR2(2000 BYTE),            "RECORD#"       NUMBER(38,0),            "SOURCE_ROW" ROWID,            "REJECTED_RECORD" CLOB        )        SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE        (            INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT        )        TABLESPACE "ATTRIBDW_DATA" LOB        (            "REJECTED_RECORD"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        ) ;COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ID"IS    'Current operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."SEQ_NUMBER"IS    'Cube build log sequence number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ERROR#"IS    'Error number being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ERROR_MESSAGE"IS    'Error text being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."RECORD#"IS    'Rejected record number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."SOURCE_ROW"IS    'Rejected record''s ROWID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."REJECTED_RECORD"IS    'Rejected record copy';    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"IS    'Cube rejected records log';CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407304C00007$$" ON "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;

    Read the article

  • Examples of data-centric LOB applications in Silverlight with creative design?

    - by Alexander Galkin
    I am a database developer and I have rather limited experience with interface design. I am currently working on a project in my free time which is mostly data-centric, and I would like to develop an eye-catching interface for it using Silverlight. What I am looking for are examples of nice and interesting LOB applications in Silverlight built without paid frameworks. So far, I could only find something like the Telerik sample application, but it uses a lot of Telerik controls I can't afford to buy.

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, due to the high volume and unstoppable growth of information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption, and availability can be a nightmare. One standard option that you have with Oracle WebCenter Content is to store data in the database, and the Oracle Database lets you leverage features like compression, deduplication, encryption, and seamless backup. But with a huge volume, the challenge is passed to the DBA to keep the WebCenter Content database up and running. One solution is the use of DB partitions for your content repository, but what are the implications of this? Can it fit your business requirements? Well, yes; it's up to you how you manage it, you just need a good plan. During your “storage brainstorm”, keep in mind what you need: do you need to store petabytes of documents? Does everything need to be online? Is there a way to logically separate the “good content” from the “legacy content”? The first thing that comes to my mind is to use the creation date of the document, but remember that a document can receive many revisions, so you might consider the revision creation date instead. Your plan can also have complex rules, such as per document type, per a custom metadata field like department, or a hybrid of per date, per DocType, and a specific virtual folder. Extrapolating the use, you can have your repository distributed across different servers, different disks, and different disk types (such as SSDs, SAS, SATA, tape, …), separated according to your business requirements, keeping the “hot” content apart from the legacy and easily matching your compliance requirements. If you want to partition by revision, the simple way is to consider the dID, which is the sequential unique id for every content item created using WebCenter Content, or the dLastModified, the date field of the FileStorage table that records when the content was added to the DB table using SecureFiles. Using the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using “partition by range” on the dLastModified column (you can use the dID, or a join with other tables for other metadata such as dDocType, security, etc.). The test scenario below covers:
- Previously existing data on the JDBC Storage to be migrated to the new partitioned JDBC Storage
- Partition by date
- Automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+)
- Deduplication and compression for legacy data
- Oracle WebCenter Content 11g PS5 (could present some customizations that do not affect the test scenario)

For the test case you need some data stored using JDBC Storage to be the “legacy” data. If you have not done this before, just create a storage rule pointed to the JDBC Storage: enable the StorageRule metadata field in the UI and upload some documents using this rule. For this test case you can run as the schema owner or as a DBA user. We will use the schema owner TESTS_OCS. I can't forget to mention that this is just a test and you should take a proper backup of your environment. When you use the schema owner, you need some privileges; using the DBA user, grant the privileges needed:

REM Grant privileges required for online redefinition.
GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
GRANT ALTER ANY TABLE TO TESTS_OCS;
GRANT DROP ANY TABLE TO TESTS_OCS;
GRANT LOCK ANY TABLE TO TESTS_OCS;
GRANT CREATE ANY TABLE TO TESTS_OCS;
GRANT SELECT ANY TABLE TO TESTS_OCS;

REM Privileges required to perform cloning of dependent objects.
GRANT CREATE ANY TRIGGER TO TESTS_OCS;
GRANT CREATE ANY INDEX TO TESTS_OCS;

In our test scenario we will separate the content as Legacy, Day1, Day2, Day3 and Future. The last one will be partitioned automatically, using three tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year, or any rule that you choose. Tablespaces for the test scenario:

CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

Before starting, gather optimizer statistics on the actual FileStorage table:

EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

Now check whether it is possible to execute the redefinition process:

EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

If there are no error messages, you are good to go. Create a partitioned interim FileStorage table.
You need to create a new table with the partition information to act as an interim table:

CREATE TABLE FILESTORAGE_Part
(
    DID NUMBER(*,0) NOT NULL ENABLE,
    DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
    DLASTMODIFIED TIMESTAMP (6),
    DFILESIZE NUMBER(*,0),
    DISDELETED VARCHAR2(1 CHAR),
    BFILEDATA BLOB
)
LOB (BFILEDATA) STORE AS SECUREFILE
(
    ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS
)
PARTITION BY RANGE (DLASTMODIFIED)
INTERVAL (NUMTODSINTERVAL(1,'DAY'))
STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
(
    PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
    TABLESPACE TESTS_OCS_PART_LEGACY
    LOB (BFILEDATA) STORE AS SECUREFILE
    (TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH),
    PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
    TABLESPACE TESTS_OCS_PART_DAY1
    LOB (BFILEDATA) STORE AS SECUREFILE
    (TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS),
    PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
    TABLESPACE TESTS_OCS_PART_DAY2
    LOB (BFILEDATA) STORE AS SECUREFILE
    (TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS),
    PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
    TABLESPACE TESTS_OCS_PART_DAY3
    LOB (BFILEDATA) STORE AS SECUREFILE
    (TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS)
);

After the creation you should see your partitions defined. Note that only the fixed range partitions have been created; none of the interval partitions have been created yet. Start the redefinition process:

BEGIN
    DBMS_REDEFINITION.START_REDEF_TABLE(
        uname        => 'TESTS_OCS',
        orig_table   => 'FileStorage',
        int_table    => 'FileStorage_PART',
        col_mapping  => NULL,
        options_flag => DBMS_REDEFINITION.CONS_USE_PK);
END;

This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user, you can check the progress with this command:

SELECT * FROM v$sesstat WHERE sid = 1;

Copy the dependent objects:

DECLARE
    redefinition_errors PLS_INTEGER := 0;
BEGIN
    DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
        uname            => 'TESTS_OCS',
        orig_table       => 'FileStorage',
        int_table        => 'FileStorage_PART',
        copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
        copy_triggers    => TRUE,
        copy_constraints => TRUE,
        copy_privileges  => TRUE,
        ignore_errors    => TRUE,
        num_errors       => redefinition_errors,
        copy_statistics  => FALSE,
        copy_mvlog       => FALSE);
    IF (redefinition_errors > 0) THEN
        DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors));
    END IF;
END;

With the DBA user, verify that there are no errors:

SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

Note that it will show two lines related to the constraints; this is expected.
Synchronize the interim table FileStorage_PART:

BEGIN
    DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
        uname      => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table  => 'FileStorage_PART');
END;

Gather statistics on the new table:

EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

Complete the redefinition:

BEGIN
    DBMS_REDEFINITION.FINISH_REDEF_TABLE(
        uname      => 'TESTS_OCS',
        orig_table => 'FileStorage',
        int_table  => 'FileStorage_PART');
END;

During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the fixed partition ranges, you should see the new interval partitions created automatically; no error occurs if you “forgot” to create all the future ranges. You can now drop the FileStorage_PART table:

DROP TABLE FileStorage_PART PURGE;

To check that the FileStorage table is valid and partitioned, use this command:

SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

You can list the contents of the FileStorage table in a specific partition, for example:

SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

Some useful commands that you can use to check the partitions (note that you need to run them as a DBA user):

SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';

After the redefinition process completes, you have a new FileStorage table storing all content whose storage rule points to the JDBC Storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area: take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file and check that the file was created and can be opened; this means everything is working. The redefinition process can be repeated many times, which allows you to test which layout works best, over and over again.

Now some interesting maintenance actions related to the partitions:

Make a tablespace read-only. There are no issues viewing content, as WebCenter Content does not alter existing revisions. When you try to delete content that is part of a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent errors today is to create a custom component that checks the partitions and, for a document in a “read only” repository, executes the deletion of the metadata and marks the document to be deleted on the next DB maintenance, like a new redefinition.

Take a tablespace offline for archiving purposes or any other reason.
When you try to open a document that is included in this tablespace, you will receive an error that the content could not be retrieved, but the other online tablespaces are not affected. The same behavior applies when deleting documents. Again, a custom component is the solution: if you have a document “out of range”, the component can show a message that the repository for that document is offline. This can be extended with an option for the user to request that it be put online again.

Move some legacy content to an offline repository (table), using the exchange option to move the content from one partition to an empty non-partitioned table like FileStorage_LEGACY. Note that this option removes the records from FileStorage, and you will no longer be able to open the stored content. You always need to keep the indexes and constraints in mind.

A redefinition separating the original content (vault) from the renditions, split by date at the same time. This could be an option for DAM environments that want a special place for the renditions and want to put the original files in storage with lower performance. The process will be the same; you just need to change the script of the interim table to use composite partitioning. It will be something like:

CREATE TABLE FILESTORAGE_RenditionPart
(
    DID NUMBER(*,0) NOT NULL ENABLE,
    DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
    DLASTMODIFIED TIMESTAMP (6),
    DFILESIZE NUMBER(*,0),
    DISDELETED VARCHAR2(1 CHAR),
    BFILEDATA BLOB
)
LOB (BFILEDATA) STORE AS SECUREFILE
(
    ENABLE STORAGE IN ROW NOCACHE LOGGING KEEP_DUPLICATES NOCOMPRESS
)
PARTITION BY LIST (DRENDITIONID)
SUBPARTITION BY RANGE (DLASTMODIFIED)
(
    PARTITION Vault VALUES ('primaryFile')
    (
        SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
    ),
    PARTITION WebLayout VALUES ('webViewableFile')
    (
        SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
    ),
    PARTITION Special VALUES ('Special')
    (
        SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN
        (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
        SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
    )
)
ENABLE ROW MOVEMENT;

The next post related to the partitioned repository will come with a sample component to handle the possible exceptions when you need to take a tablespace or partition offline, or move it to another place. Also, we can include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore Provider pointed to a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you, or if you have a use case not listed, leave a comment. Cross-posted on the blog.ContentrA.com
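To make the read-only and offline maintenance scenarios above concrete, here is a minimal sketch (an editor's addition, not part of the original post, reusing the tablespace names from the test scenario):

-- Make the legacy partition's tablespace read-only: reads keep working,
-- but deleting a revision stored there raises an error.
ALTER TABLESPACE TESTS_OCS_PART_LEGACY READ ONLY;
ALTER TABLESPACE TESTS_OCS_PART_LEGACY READ WRITE;

-- Take a day's tablespace offline: content stored in it can no longer
-- be retrieved, while the other online partitions remain unaffected.
ALTER TABLESPACE TESTS_OCS_PART_DAY1 OFFLINE NORMAL;
ALTER TABLESPACE TESTS_OCS_PART_DAY1 ONLINE;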

    Read the article

  • I see no LOBs!

    - by Paul White
    Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place.  The table in question wasn’t particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose.  Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types.  To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute.

Ok, enough of the preamble.  I can’t reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect:

IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL DROP TABLE dbo.Test
GO
CREATE TABLE dbo.Test
(
    row_id NUMERIC IDENTITY NOT NULL,
    col01 NVARCHAR(450) NOT NULL,
    col02 NVARCHAR(450) NOT NULL,
    col03 NVARCHAR(450) NOT NULL,
    col04 NVARCHAR(450) NOT NULL,
    col05 NVARCHAR(450) NOT NULL,
    col06 NVARCHAR(450) NOT NULL,
    col07 NVARCHAR(450) NOT NULL,
    col08 NVARCHAR(450) NOT NULL,
    col09 NVARCHAR(450) NOT NULL,
    col10 NVARCHAR(450) NOT NULL,
    CONSTRAINT [PK dbo.Test row_id] PRIMARY KEY CLUSTERED (row_id)
);

The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row:

WITH Numbers AS
(
    -- Generates numbers 1 - 450 inclusive
    SELECT TOP (450) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0))
    FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3
    ORDER BY n ASC
)
INSERT dbo.Test WITH (TABLOCKX)
SELECT REPLICATE(N'A', N.n), REPLICATE(N'B', N.n), REPLICATE(N'C', N.n), REPLICATE(N'D', N.n), REPLICATE(N'E', N.n),
       REPLICATE(N'F', N.n), REPLICATE(N'G', N.n), REPLICATE(N'H', N.n), REPLICATE(N'I', N.n), REPLICATE(N'J', N.n)
FROM Numbers AS N
ORDER BY N.n ASC;

Once those two scripts have run, the table contains 450 rows and 10 columns of data. Most of the time, when we query data from this table, we don’t see any LOB logical reads, for example:

-- Find the maximum length of the data in
-- column 5 for a range of rows
SELECT result = MAX(DATALENGTH(T.col05))
FROM dbo.Test AS T
WHERE row_id BETWEEN 50 AND 100;

But with a different query…

-- Read all the data in column 1
SELECT result = MAX(DATALENGTH(T.col01))
FROM dbo.Test AS T;

…suddenly we have 49 LOB logical reads, as well as the ‘normal’ logical reads we would expect.

The Explanation

If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes.  If we needed to store more data than would fit in an 8060 byte row (including internal overhead) we had to use a LOB column – TEXT, NTEXT, or IMAGE.  These special data types store the large data values in a separate structure, with just a small pointer left in the original row.

Row Overflow

SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.
You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit. Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types).  Fixed-length columns (like INTEGER and DATETIME for example) never move into ‘row overflow’ storage.  The decision to move a column off-row is done on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others. In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column record for that row.  Note that in the case of an UPDATE statement that results in the 8060 byte limit being exceeded, it might not be the column that grew that is moved!

Sneaky LOBs

Anyway, that’s all very interesting but I don’t want to get too carried away with the intricacies of row-overflow storage internals.  The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage.  Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row.

Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now.  Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the ‘separate storage’ I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types.  The disadvantages are also the same: when SQL Server needs a row-overflow column value it needs to follow the in-row pointer and navigate another chain of pages, just like retrieving a traditional LOB.

And Finally…

In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes.  A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more lob logical reads as the storage engine locates the data.  The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row.  You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on lob logical reads and which columns get moved when.  You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this).

Be aware that SQL Server will not warn you when it moves ‘ordinary’ variable-length columns into overflow storage, and it can have dramatic effects on performance.  It makes more sense than ever to choose column data types sensibly.
If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word. Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs.  I find that quite appealing, but the ‘S’ stands for ‘small’, which makes expanding the whole acronym a little daft-sounding…small large objects anyone? © Paul White 2011 email: [email protected] twitter: @SQL_Kiwi
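If you want to see the row-overflow allocation unit directly (an editor's note, not part of Paul's post), here is a minimal sketch using the dbo.Test table from the script above, assuming SQL Server 2005 or later:

-- DETAILED mode reports one row per allocation unit type per index level;
-- look for ROW_OVERFLOW_DATA with a non-zero page_count.
SELECT index_id, alloc_unit_type_desc, page_count, record_count
FROM sys.dm_db_index_physical_stats(
    DB_ID(), OBJECT_ID(N'dbo.Test'), NULL, NULL, 'DETAILED');

The lob logical reads reported by STATISTICS IO in the example correspond to reads against that ROW_OVERFLOW_DATA allocation unit.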

    Read the article

  • First Windows Phone 7 – Mobile LOB App.

    - by Richard Jones
    So I spent a couple of hours yesterday building my first Windows 7 Phone Series application.   I still can’t get used to saying Windows 7 Phone Series, so I think I’ll just go with WP7.   I must say I’m really impressed.    Calling web services is a breeze, and laying out a user interface is very straightforward.   I had made calls into Dynamics NAV using web services in under 10 minutes. Working in XAML takes a bit of getting used to; I’m not trying to do anything too clever yet. One thing I will point out: to transition from one XAML page to the next, use this.NavigationService.Navigate(new Uri("/Primary.xaml", UriKind.Relative)); Coming from the Compact Framework, this is the equivalent of a Window.Show. It’s so nice to be able to talk enthusiastically about Windows-based mobile development again!

    Read the article

  • Is There A Need For End-To-End ExtJS to Microsoft Server (MVC-C#, LOB) 4 Day Class? (Poll Enclosed)

    Over the past couple of years, the focus of the web development I’ve been doing involves building highly flexible, highly scalable and straightforward web sites to implement and maintain Line of…

    Read the article


1 2 3 4 5 6 7  | Next Page >