Analysis Services (SSAS) - Unexpected Internal Error when processing (ProcessUpdate). Workaround/Resolution

Posted by James Rogers on Geeks with Blogs
Published on Mon, 20 Dec 2010 17:51:05 GMT Indexed on 2010/12/21 5:01 UTC

Many implementations require ProcessUpdate to support Type 1 slowly changing dimensions. ProcessUpdate drops the indexes and aggregations in any partition affected by data changes in the dimension being processed. Twice now I have had situations where the processing fails with "Internal error: An unexpected exception occurred," and any subsequent ProcessUpdate fails with the same error. According to Microsoft, the issue is corrupt indexes for the dimension(s) being processed in the partitions of the affected measure group. I cannot guarantee that the following will correct your problem, but it did in my case and saved us quite a bit of downtime.
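For reference, a typical ProcessUpdate batch on a dimension looks like the following (MyDatabase and MyDimension are placeholder IDs; substitute your own object IDs):

 <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
     <Process>
       <Object>
         <DatabaseID>MyDatabase</DatabaseID>
         <DimensionID>MyDimension</DimensionID>
       </Object>
       <Type>ProcessUpdate</Type>
       <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
     </Process>
  </Parallel>
 </Batch>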
 
Workaround: Run ProcessIndexes on the entire cube that is throwing the error. This corrected the problem on both SQL Server 2008 and 2008 R2.
 
Pros: 
  1. Does not require a complete rebuild of the data (ProcessFull) for either the Dimension or Cube.
  2. User access can continue while the ProcessIndexes is underway.
 
Cons:
  1. Can take a long time, especially on large cubes with many partitions, dimensions and/or aggregations.
  2. Query performance is usually severely impacted due to the memory and CPU requirements of aggregation and index building.
 
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
 <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
      <Object>
        <DatabaseID>MyDatabase</DatabaseID>
        <CubeID>MyCube</CubeID>
      </Object>
      <Type>ProcessIndexes</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
 </Parallel>
</Batch>
 
The cube where the corruption exists can be found by running Profiler while the ProcessUpdate is executing. The first partition that shows the "The Job has ended in failure." message in the TextData column belongs to the cube/measure group that has the corruption. You can try running ProcessIndexes on just that measure group. This may correct the problem and save additional time if the cube contains other large measure groups that are not affected by the corruption.
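To scope the processing to a single measure group, add a MeasureGroupID to the Object reference in the batch above (MyMeasureGroup is a placeholder; use the ID of the measure group Profiler identified):

 <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
     <Process>
       <Object>
         <DatabaseID>MyDatabase</DatabaseID>
         <CubeID>MyCube</CubeID>
         <MeasureGroupID>MyMeasureGroup</MeasureGroupID>
       </Object>
       <Type>ProcessIndexes</Type>
       <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
     </Process>
  </Parallel>
 </Batch>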
 
Remember to execute your normal ProcessUpdate batch after the ProcessIndexes completes successfully; ProcessIndexes rebuilds indexes and aggregations but does not pick up data changes.
 
Things that did not work:
  1. ProcessClearIndexes - why this doesn't work and ProcessIndexes does is unclear at this point.
  2. ProcessFull on the partition in question. In my latest case, this cleared up the problem for that partition, but the next partition ProcessUpdate touched that had data in it would generate the same error. This leads me to believe the corruption exists in every partition in the affected measure group that contains data.
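For completeness, a per-partition ProcessFull attempt like the one described in item 2 can be issued as follows (all object IDs here are placeholders):

 <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
     <Process>
       <Object>
         <DatabaseID>MyDatabase</DatabaseID>
         <CubeID>MyCube</CubeID>
         <MeasureGroupID>MyMeasureGroup</MeasureGroupID>
         <PartitionID>MyPartition</PartitionID>
       </Object>
       <Type>ProcessFull</Type>
       <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
     </Process>
  </Parallel>
 </Batch>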
 
NOTE: I experienced this problem in both SQL 2008 and SQL 2008 R2 Analysis Services environments, on separate servers built from the same relational database. This leads me to believe that some data condition in the tables used for the dimension processing caused the corruption, since the two environments were on physically separate hardware. I am waiting on Microsoft to analyze the dumps for more insight into what actually caused the corruption and will update this post accordingly.

© Geeks with Blogs or respective owner