Search Results

Search found 119 results on 5 pages for 'mdx'.


  • An OLAP client!

    - by Davide Mauri
    While surfing CodePlex I’ve come across a very interesting tool for all BI developers who miss a decent OLAP client in which to write, run & test MDX queries: http://ranetuilibraryolap.codeplex.com/ I haven’t tested it yet, but I’ll surely do so this week and I’ll post my impressions ASAP. The first impression, just from looking at the CodePlex page, is that the tool rocks!

    Read the article

  • Any way to edit Warcraft MDX or MDL Animated models?

    - by Aralox
    I have been searching for a while for a way to get an animated MDL or MDX model into any 3D animation package (such as Blender), but so far have not had any success. I found a few methods of getting textured static MDX or MDL models into Blender/Milkshape/Hexagon, but no one seems to have written an importer that handles the MDL/MDX model's keyframe animation. On that note, if anyone knows of a way of importing a keyframe-animated 3DS model into Blender, a lot of people and I would appreciate it if you could let us know. Thanks for any help! :) PS: For anyone curious about getting static MDL or MDX models into Blender, see here: http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Import-Export/WarCraft_MDL

    Read the article

  • Why creating a new MDX language instead of extending SQL?

    - by DReispt
    I have long experience with SQL, but recently began working with data warehouse and OLAP technologies: building fact and dimension tables that are then queried using MDX (MultiDimensional eXpressions). The problem is that MDX works with a completely different logic compared to SQL, and it's a whole new learning curve even for someone with a strong SQL background. Yes, MDX allows you to do things that would be hard or almost impossible with plain SQL. But sometimes it's frustrating to spend hours on an MDX query for something you know you could achieve in minutes using SQL (OK, you can tell me to RTFM...). So why go to the trouble of creating a completely different new language when you could build on SQL, extending it to add the features needed by OLAP applications?
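
    For readers who haven't seen the two side by side, here is a rough illustration (not taken from the question) of the kind of query MDX keeps terse: a measure broken down by two dimensions at once, plus a share-of-total calculation that plain SQL would typically need window functions or self-joins for. The cube, dimension and measure names below are hypothetical, Adventure Works-style placeholders.

      WITH MEMBER [Measures].[Share of All Products] AS
        -- ratio of the current cell to the same cell at the All Products level
        [Measures].[Sales Amount]
        / ( [Product].[Category].[All], [Measures].[Sales Amount] ),
        FORMAT_STRING = 'Percent'
      SELECT
        { [Measures].[Sales Amount], [Measures].[Share of All Products] } ON COLUMNS,
        [Product].[Category].[Category].Members
        * [Date].[Calendar Year].[Calendar Year].Members ON ROWS
      FROM [Sales]

    The aggregation, the grouping over both dimensions, and the "total" context all come from the cube itself; that expressiveness is the usual argument for a dedicated language, at the price of the learning curve described above.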

    Read the article

  • C# MDX RenderToSurface, where to reset after device is lost?

    - by Moritz Schöfl
    Hi, I've got a problem with the RenderToSurface class. When I resize the form of my device, the Draw method is still called and doesn't throw an exception; it looks like this: device.Clear(ClearFlags.Target, Color.Red, 0, 0); device.BeginScene(); /* commented-out code here */ device.EndScene(); device.Present(); In another method, I wrote this: renderToSurface.BeginScene(surfaces[currentIndex]); /* commented-out code here */ renderToSurface.EndScene(Filter.None); and this method seems to throw a null reference exception when I resize the window. So my question is: where do I reset / restore / handle the RenderToSurface class? (I tried it with the DeviceReset event as follows: void OnDeviceReset(object sender, EventArgs e) { renderToSurface = new RenderToSurface(Game.Device, Game.ClientSize.Width, Game.ClientSize.Height, Format.A8R8G8B8, true, DepthFormat.D16); })

    Read the article

  • MDX: Calculate the number of days for which the cumulative sum of revenue from the end-of-month date matches the given debt amount

    - by Shuchi
    Hi, I have a financial cube and I have to calculate Daily Sales Outstanding as: the number of days between the last date of the selected month and the earliest transaction date at which the cumulative sum of revenue, accumulated backwards from the last date of the month, reaches the debt amount for that date. E.g. on 31/12/2009 my debt amount = 2,500,000:

                                  31-Dec-09    30-Nov-09    15-Oct-09    31-Oct-09
    Revenue                       1,000,000    1,000,000    500,000      1,0000
    Cumulative sum of revenue     1,000,000    2,000,000    2,500,000    4,000,000
    No of days                    31           30           16

    On 15/Oct/09 the cumulative revenue is 2,500,000, which equals my debt amount on that day, so the count of days = 31 + 30 + 16 = 77 days. In other words, sum revenue from the selected date backwards until the sum total equals or exceeds the to-date balance of the debtors. Any help will be highly appreciated. If I haven't explained clearly enough or if you need more information then please let me know. Thanks in advance. Shuchi.
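
    No answer is given in this excerpt, but as a starting point, one common shape for this kind of count-back in MDX is sketched below. It assumes a hypothetical [Date].[Date] attribute, [Measures].[Revenue] and [Measures].[Debt Amount] measures, and a hard-coded month-end member; none of these names come from the question, and both the boundary condition (<= vs. <) and performance would need attention on a real cube.

      WITH
        -- revenue accumulated backwards from each day up to the selected month-end
        MEMBER [Measures].[Revenue Accumulated Back] AS
          SUM( [Date].[Date].CurrentMember : [Date].[Date].&[2009-12-31],
               [Measures].[Revenue] )
        -- count the days whose backward accumulation has not yet exceeded the debt amount
        MEMBER [Measures].[Days Sales Outstanding] AS
          COUNT(
            FILTER(
              NULL : [Date].[Date].&[2009-12-31],
              [Measures].[Revenue Accumulated Back] <= [Measures].[Debt Amount]
            )
          )
      SELECT { [Measures].[Days Sales Outstanding] } ON COLUMNS
      FROM [Finance]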

    Read the article

  • BonaVista Dimensions used as a report service

    - by Marco Russo (SQLBI)
    Recently I have seen a long demo of BonaVista Dimensions. It is a product that is able to create reports and, most importantly, dashboards. You can also use it without SQL Server and Analysis Services, just by importing data into a local cube file that you can model using its own simple-to-use interface. But what is interesting to me (in this post) is the capability to connect to an SSAS cube. It is somewhat similar to XLCubed, and in reality these two products have something in common, because both...(read more)

    Read the article

  • SSAS OLAP MDX and relationships

    - by Sonic Soul
    I'm new to OLAP, and still not sure how to create a relationship between two or more entities. I am basing my cube on views. For simplicity's sake let's call them like this: viewParent (ParentID PK) viewChild (ChildID PK, ParentID FK) These views have more fields, but they're not important for this question. In my data source, I defined a relationship between viewParent and viewChild using ParentID for the link. As for measures, I was forced to create separate measures for Parent and Child. In my MDX query, however, the relationship does not seem to be enforced. If I select the record count for parent and child and add some filters for the parent, the child count does not reflect it: SELECT { [Measures].[ParentCount],[Measures].[ChildCount] } ON COLUMNS FROM [Cube] WHERE { ( {[Time].[Month].&[2011-06-01T00:00:00]} ,{[SomeDimension].&[Foo]} ) } The selected ParentCount is correct, but ChildCount is not affected by any of the filters (because they are parent filters). However, since I defined a relationship, how can I take advantage of that to filter children by the parent filters?
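
    The excerpt ends there, but two notes may help future readers. In SSAS, a slicer only affects a measure group if the dimension is related to that measure group on the cube's Dimension Usage tab, so relating Time and SomeDimension (or the Parent dimension) to the child measure group is the usual first step; once that is done, the query above needs no changes. If that relationship cannot be added, the sketch below shows one possible MDX-side workaround that pushes the filter through the parent fact data with EXISTS. The attribute name [Parent].[ParentID], the measure group name "Parent", and the assumption that the Parent dimension is related to the child measure group are all guesses, not taken from the question.

      WITH MEMBER [Measures].[ChildCount For Filtered Parents] AS
        SUM(
          -- parents that have rows in the (hypothetical) "Parent" measure group
          -- for the slicer members used in the original query
          EXISTS(
            [Parent].[ParentID].[ParentID].Members,
            { [SomeDimension].&[Foo] } * { [Time].[Month].&[2011-06-01T00:00:00] },
            "Parent"
          ),
          [Measures].[ChildCount]
        )
      SELECT { [Measures].[ParentCount],
               [Measures].[ChildCount For Filtered Parents] } ON COLUMNS
      FROM [Cube]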

    Read the article

  • Change column names of a cube action as they appear in Visual Studio

    - by hermann
    The title pretty much says it all. I have a cube with data in it, and I have yet to find a way to change the column names. They appear in a very ugly manner, like [cubeName].[$dimension.columnName]. I have tried everything I know and everything I found on the web, but nothing seems to work. What I tried in most cases is to create an Action in the Actions tab and write some MDX in there. No results whatsoever; it's as if the action is never run. Does anyone know how to do this? I've spent about 3 days trying to figure this out. Thank you.

    Read the article

  • How to create the following cube?

    - by Itsgkiran
    Hi! For example, given the following database table:

    BatchID    BatchName    Chemical    Value
    BI-1       BN-1         CH-1        1
    BI-2       BN-2         CH-2        2

    I need to display the following cube:

                 BI-1    BI-2
                 BN-1    BN-2
    CH-1         1       null
    CH-2         null    2

    Here BI-1 and BN-1 are two header rows in a single column, and I need to display the chemical values as rows against them. What is the MDX query for this? Could you please help me solve this problem? Thank you.
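
    No answer appears in this excerpt, but the general shape of such a query is sketched below, assuming a cube named [ChemicalCube] with a Batch dimension (BatchID and BatchName attributes), a Chemical dimension, and a [Value] measure; all of these names are invented from the table above, so they would need to match the real cube.

      SELECT
        -- crossjoining the two batch attributes produces the stacked BI-x / BN-x column header
        NON EMPTY
          [Batch].[BatchID].[BatchID].Members
          * [Batch].[BatchName].[BatchName].Members ON COLUMNS,
        [Chemical].[Chemical].[Chemical].Members ON ROWS
      FROM [ChemicalCube]
      WHERE ( [Measures].[Value] )

    NON EMPTY on the column axis drops the meaningless BatchID/BatchName combinations (such as BI-1 with BN-2), while the null cells inside the remaining grid are preserved, which matches the desired layout.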

    Read the article

  • Preserving Language across inline Calculated Members in SSAS

    - by Tullo
    Problem: I need to retrieve the language of a given cell from the cube. The cell is defined by code-generated MDX, which can have an arbitrary level of indirection as far as calculated members and sets go (defined in the WITH clause). SSAS appears to ignore the Language of the specified members when you declare a calculated member inline in the query. Example: the cube's default locale is 1033 (en-US); the cube contains a calculated measure called [Net Pounds], which is defined as [Net Amt] with language=2057 (en-GB); the query requests this measure alongside an inline calculated measure which is simply an alias of [Net Pounds]. When used directly, the measure is formatted in the en-GB locale, but when aliased, the measure falls back to using the cube default of en-US. Here's what the query looks like: WITH MEMBER [Measures].[Pounds Indirect] AS [Measures].[Net Pounds] SELECT { [Measures].[Pounds Indirect], [Measures].[Net Pounds] } ON AXIS (0) FROM [Cube] CELL PROPERTIES language, value, formatted_value The query returns the expected two cells, but the [Net Pounds] locale is only used for the cell where the measure is referenced directly. Is there an option or switch somewhere in SSAS that will allow locale information to be visible in calculated members? I realise that it is possible to declare the inline calculated member in a particular locale, but that would involve extracting the locale from the tuple first, which (since the cube's member is isolated behind the application's query schema) is unknown.
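
    For reference, the workaround the poster mentions (declaring the inline member with an explicit locale) looks roughly like the sketch below. The 2057 literal is the en-GB LCID from the question; the approach only helps when the query generator already knows which locale to apply, which is exactly the information the poster cannot obtain.

      WITH MEMBER [Measures].[Pounds Indirect] AS
        [Measures].[Net Pounds],
        LANGUAGE = 2057   -- en-GB; must be known by the query generator up front
      SELECT { [Measures].[Pounds Indirect], [Measures].[Net Pounds] } ON AXIS(0)
      FROM [Cube]
      CELL PROPERTIES LANGUAGE, VALUE, FORMATTED_VALUE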

    Read the article

  • SSAS Compare: an intern’s journey

    - by Red Gate Software BI Tools Team
    About a month ago, David mentioned an intern working in the BI Tools Team. That intern happens to be me! In five weeks’ time, I’ll start my second year of Computer Science at the University of Cambridge and be a full-time student again, but for the past eight weeks, I’ve been living a completely different life. As Jon mentioned before, the teams here at Red Gate are small and everyone (including the interns!) is responsible for the product as a whole. I’ve attended planning sessions, UX tests, daily meetings, and everything else a full-time member of the team would; I had as much say in where we would go next with the product as anyone; I was able to see that what I was doing was an important part of the product from the feedback we got in the UX tests. All these things almost made me forget that this is just an internship and not my full-time job.

    First steps at Red Gate

    Being based in Cambridge, Red Gate has many Cambridge university graduates working for them. They also hire some Cambridge undergraduates for internships each summer. With its popularity with university graduates and its great working environment, Red Gate has managed to build up a great reputation. When I thought of doing an internship here in Cambridge, Red Gate just seemed to be the obvious choice for my first real work experience. On my first day at Red Gate, David, the lead developer for SSAS Compare, helped me settle in and explained what I’d be doing. My task was to improve the user experience of displaying differences between MDX scripts by syntax highlighting, script formatting, and improving the difference identification in the first place. David suggested how I should approach the problem, but left all the details and design decisions to me. That was when I realised how much independence and responsibility I’d have.

    What I’ve done

    If you launch the latest version of SSAS Compare and drill down to an MDX script difference, you can see the changes that have been made. In earlier versions, you could only see the scripts in plain text on both sides — either in black or grey, depending on whether they were the same or not. However, you couldn’t see exactly where the scripts were different, which was especially annoying when the two scripts were large – as they often are. Furthermore, if parts of the two scripts were formatted differently, they seemed to be different but were actually the same, which caused even more confusion and made it difficult to see where the differences were. All these issues have been fixed now. The two scripts are automatically formatted by the tool so that if two things are syntactically equivalent, they look the same – including case differences in keywords! The actual difference is highlighted in grey, which makes them easy to spot. The difference identification has been improved as well, so two scripts aren’t identified as different if there’s just a difference in meaningless whitespace characters, or when you have “select” on one side and “SELECT” on the other. We also have syntax highlighting, which makes it easier to read the scripts.

    How I did it

    In order to do the formatting properly, we decided to parse the MDX scripts. After some investigation into parser builders, I decided to go with the GOLD Parser builder and the bsn-goldparser .NET engine. GOLD Parser builder provides a fairly nice GUI to write, build, and test grammar in. We also liked the idea of separating the grammar building from parsing a text.
    The bsn-goldparser is one of many .NET engines for GOLD, and although it doesn’t support the newest features of GOLD Parser, it has “the ability to map semantic action classes to terminals or reduction rules, so that a completely functional semantic AST can be created directly without intermediate token AST representation, and without the need for glue code.” That makes it much easier for us to change the implementation in our program when we change the grammar. As bsn-goldparser is open source, and I wanted some more features in it, I contributed two new features which have now been merged into the project. Unfortunately, there wasn’t an MDX grammar written for GOLD already, so I had to write it myself. I was referencing MSDN to get the formal grammar specification, but the specification was all over the place, so it wasn’t that easy to find and implement. We’re aware that we don’t yet fully support all valid MDX, so sometimes you’ll just see the MDX script difference displayed the old way. In that case, there is some grammar construct we don’t yet recognise. If you come across something SSAS Compare doesn’t recognise, we’d love to hear about it so we can add it to our grammar. When some MDX script gets parsed, a tree is produced. That tree can then be processed into a list of inlines which deal with the correct formatting and can be output to the screen. Doing all this has led me to many new technologies and projects I haven’t worked with before. This was my first experience with C# and Visual Studio, although I have done things in Java before. I have learnt how to unit test with NUnit, how to do dependency injection with Ninject, how to source-control code with SVN and Mercurial, how to build with TeamCity, how to use GOLD, and many other things.

    What’s coming next

    Sadly, my internship comes to an end this week, so there will be less development on the MDX difference view for a while. But the team is going to work on marking the differences better and making it consistent with the difference indication in the top part of the comparison window, and will keep adding support for more MDX grammar so you can see the differences easily in every comparison you make. So long! And maybe I’ll see you next summer!

    Read the article

  • Raid1+0: create stripe over two /dev/mdx on partition or not?

    - by Chris
    Given that I haven't found a way to define how a Raid10 is created with mdadm (see "How to display/define Mirror/Stripping pairs with mdadm"), I went with the Raid1+0 solution: mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdf1 mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1 mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md0 /dev/md1 My question is about the stripe. For the mirrors I create a primary partition over the full HDD and set the partition type to FD. So, should I do the same for the stripe? That is, create a partition on /dev/md0 and /dev/md1 (primary over the full 'HDD', partition type set correctly) and then build the stripe on the partitions? Is there a correct way here, or are there any advantages/disadvantages to either approach? Thank you

    Read the article

  • SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints

    - by pinaldave
    I earlier wrote about the SQL BI Quiz over here and here. The details of the quiz are here: Working with huge data is very common when it is about data warehousing. It is necessary to create cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from the cube takes a lot of time. Let us assume that your cube returns data very quickly. Suddenly, one day, it returns the data very slowly. What are the three things you will do to diagnose this? After diagnosing it, what will you do to resolve the performance issue? Participate in my question over here. I requested BI expert Jason Thomas to help with a few hints for blog readers. He is one of the leading SSAS experts and writes about a complicated subject in simple words. If queries were executing properly before but now take a long time to return the data, it means that there has been a change in the environment in which they are running. Some possible changes are listed below:

    1) Data factors: Compare the data size then and now. An increase in data can result in different execution times. Poorly written queries as well as poor design will not start showing issues till the data grows. How to find it out? (Ans: SQL Server Profiler and Perfmon counters can be used for identifying the issues and performance tuning the MDX queries.)

    2) Internal factors: Is some slow MDX query, or are multiple MDX queries, running at the same time that were not running when you tested before? Is there any locking happening due to proactive caching or processing operations? Are the measure group caches being cleared by processing operations? (Ans: Again, Profiler and Perfmon counters will help in finding it out. Load testing can be done using AS Performance Workbench (http://asperfwb.codeplex.com/) by running multiple queries at once.)

    3) External factors: Is some other application competing for the same resources?

    HINT: Read “Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis Services” (http://sqlcat.com/whitepapers/archive/2007/12/16/identifying-and-resolving-mdx-query-performance-bottlenecks-in-sql-server-2005-analysis-services.aspx)

    Well, these are great tips. Now win big prizes by participating in my question over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • The True Cost of a Solution

    - by D'Arcy Lussier
    I had a Twitter chat recently with someone suggesting Oracle and SQL Server were losing out to OSS (Open Source Software) in the enterprise due to their issues with scaling or being too generic (one size fits all). I challenged that a bit, as my experience with enterprise-sized clients has been different – averse to OSS but receptive to an established vendor. The response I got was: Found it easier to influence change by showing how X can’t solve our problems or X is extremely costly to scale. Money talks. I think this is definitely the right approach for anyone pitching an alternate or alien technology as part of a solution: identify the issue, identify the solution, then present pros and cons including a cost/benefit analysis. What can happen though is we get tunnel vision and don’t present a full view of the costs associated with a solution.

    An “Acura”te Example (I’m so clever…)

    This is my dream vehicle, a Crystal Black Pearl coloured Acura MDX with the SH-AWD package! We’re a family of 4 (5 if my daughters ever get their wish of adding a dog), and I’ve always wanted a luxury type of vehicle, so this is a perfect replacement in a few years when our Rav 4 has hit the 8 – 10 year mark.

    MSRP – $62,890

    But as we all know, that’s not *really* the cost of the vehicle. There’s taxes and fees added on, there’s the extended warranty if I choose to purchase it, there’s the finance rate that needs to be factored in…

    MSRP – $62,890
    Taxes – $7,546
    Warranty – $2,500
    SubTotal – $72,936
    Finance Charge – $1094.04
    Grand Total – $74,030

    Well! Glad we did that exercise – we discovered an extra $11k added on to the MSRP! Well now we have our true price…or do we?

    Lifetime of the Vehicle

    I’m expecting to have this vehicle for 7 – 10 years. While the hard cost of the vehicle is known and dealt with, the costs to run and maintain the vehicle are on top of this. I did some research, and here’s what I’ve found:

    Fuel and Mileage

    Gas prices are high as it is for regular fuel, but getting into an MDX will require that I *only* purchase premium fuel, which comes at a premium price. I need to expect my bill at the pump to be higher. Comparing the MDX to my 2007 Rav4 also shows I’ll be gassing up more often. The Rav4 has a city MPG of 21, while the MDX plummets to 16! The MDX does have a bigger fuel tank though, so all in all the number of times I hit the pumps might even out. Still, I estimate I’ll be spending approximately $8000 – $10000 more on gas over a 10 year period than with my current Rav4.

    Service Options Limited

    Although I have options with my Toyota here in Winnipeg (we have 4 Toyota dealerships), I do go to my original dealer for any service work. Still, I like the fact that I have options. However, there’s only one Acura dealership in all of Winnipeg! So if, for whatever reason, I’m not satisfied with the level of service, I’m stuck.

    Non Warranty Service Work

    Also, let’s not forget that there’s a bulk of work required every year that is *not* covered under warranty – oil changes, tire rotations, brake pads, etc. I expect I’ll need to get new tires at the 5 year mark as well, which can easily be $1200 – $1500 (I just paid $1000 for new tires for the Rav4 and we’re at the 5 year mark). Now these aren’t going to be *new* costs that I’m not used to from our existing vehicles, but they should still be factored in. I’d budget $500/year, or $5000 over the 10 years I’ll own the vehicle.
    Final Assessment

    So let’s re-assess the true cost of my dream MDX:

    MSRP – $62,890
    Taxes – $7,546
    Warranty – $2,500
    Finance Charge – $1094
    Gas – $10,000
    Service Work – $5000
    Grand Total – $89,030

    So now I have a better idea of the 10 year cost overall, and I’ve identified some concerns with local service availability. And there’s now much more to consider over the original $62,890 price tag.

    Tying This Back to Technology Solutions

    The process that we just went through is no different than what organizations do when considering implementing a new system, technology, or technology-based solution within their environments. It’s easy to tout the short-term cost savings of a particular product/platform/technology in a vacuum. But it’s when you consider the wider impact that the true cost comes into play. Let’s create a scenario: A company is not happy with its current data reporting suite. An employee suggests moving to an open source solution. The selling points are:
    - Because it’s open source, it’s free
    - The organization would have access to the source code so they could alter it however they wished
    - It provides features not available with the current reporting suite

    At first this sounds great to the management and executive, but then they start asking some questions and uncover more information:
    - The OSS product is built on a technology not used anywhere within the organization
    - There are no vendors offering product support for the OSS product
    - The OSS product requires a specific server platform to operate on, one that’s not standard in the organization

    All of a sudden, the true cost of implementing this solution is starting to become clearer. The company might save money on licensing costs, but their training costs would increase significantly – developers would need to learn how to develop in the technology the OSS solution was built on, IT staff must learn how to set up and maintain a new server platform within their existing infrastructure, and if a problem was found there was no vendor to contact for support. The true cost of implementing a “free” OSS solution is actually spinning up a project to implement it within the organization – no small cost. And that’s just the short-term cost. Now the organization must ensure they maintain trained staff who can make changes to the OSS reporting solution and IT staff that will stay knowledgeable in the new server platform. If those skills are very niche, then higher labour costs could be incurred if those people are hard to find or if trained employees use that knowledge as leverage for higher pay. Maybe a vendor exists that will contract out support, but then there are those costs to consider as well. And let’s not forget end-user training – in our example, anyone that runs reports will need to be trained on how to use the new system.

    Here’s the Point

    We still tend to look at software in an “off the shelf” kind of way. It’s very easy to say “oh, this product is better than vendor x’s product – and it’s free because it’s OSS!” but the reality is that implementing any new technology within an organization has a cost regardless of the retail price of the product. Training, integration, support – these are real costs that impact an organization and span multiple departments. Whether you’re pitching an improved business process, a new system, or a new technology, you need to consider the bigger picture costs of implementation.
What you define as success (in our example, having better reporting functionality) might not be what others define as success if implementing your solution causes them issues. A true enterprise solution needs to consider the entire enterprise.

    Read the article

  • Properly using subprocess.PIPE in python?

    - by Gordon Fontenot
    I'm trying to use subprocess.Popen to construct a sequence to grab the duration of a video file. I've been searching for 3 days, and can't find any reason online as to why this code isn't working, but it keeps giving me a blank result: import sys import os import subprocess def main(): the_file = "/Volumes/Footage/Acura/MDX/2001/Crash Test/01 Acura MDX Front Crash.mov" ffmpeg = subprocess.Popen(['/opt/local/bin/ffmpeg', '-i', the_file], stdout = subprocess.PIPE, ) grep = subprocess.Popen(['grep', 'Duration'], stdin = subprocess.PIPE, stdout = subprocess.PIPE, ) cut = subprocess.Popen(['cut', '-d', ' ', '-f', '4'], stdin = subprocess.PIPE, stdout = subprocess.PIPE, ) sed = subprocess.Popen(['sed', 's/,//'], stdin = subprocess.PIPE, stdout = subprocess.PIPE, ) duration = sed.communicate() print duration if __name__ == '__main__': main()

    Read the article

  • VMR9Allocator (DirectShow .NET + SlimDX)

    - by faulty
    I was trying to convert and run the VMR9Allocator sample for DirectShow .NET with SlimDX instead of MDX. I got an exception when it reached this line: return vmrSurfaceAllocatorNotify.SetD3DDevice(unmanagedDevice, hMonitor) in the AdviseNotify method in Allocator.cs. The exception is "No such interface supported", and the hr returned is "0x80004002". The sample runs fine with MDX, and my SlimDX install is also working, as I've written another 3D app using it that works fine. I can't seem to find out what went wrong, and Googling hasn't helped either. Apparently not many people use this combination, and none that I can find have actually stumbled into this problem. Any ideas, guys? NOTE: I asked the same question over at gamedev.net two weeks back; no answer thus far.

    Read the article

  • SSAS Reporting Services - Set specific language / translation

    - by Chris
    Hi all, in the data warehouse there's a default language for the measures, and I added a translation with German captions. In a Visual Studio Report Server project, when creating a query on my German OS, the cube and its measures are displayed in German. When dragging measures into the MDX query window, the default measure name is used. That's what I want and expect, since when writing MDX queries I would like to use the default measure names. But when executing the query, the column created for each measure is translated to German again. This results in German column names within my dataset, which I don't want; I'd like to have the English column names. I already tried changing the connection string to: Data Source=server;Initial Catalog=DataWarehouse;LocaleIdentifier=1033 But that doesn't help; I still see the German translations. Does anyone know how to set a specific translation?
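
    No accepted answer is shown in this excerpt, but one workaround sometimes used when column names must not depend on translations is to alias each measure with a query-scoped calculated member, since a member defined in the WITH clause keeps exactly the name it is given in the query. The measure, dimension and cube names below are hypothetical placeholders, and whether the report fields actually pick up the alias still needs to be verified against the real dataset.

      WITH MEMBER [Measures].[Internet Sales Amount EN] AS
        [Measures].[Internet Sales Amount]   -- hypothetical measure; reference the cube's default name here
      SELECT { [Measures].[Internet Sales Amount EN] } ON COLUMNS,
        [Date].[Calendar Year].[Calendar Year].Members ON ROWS
      FROM [SalesCube]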

    Read the article

  • Microsoft PowerPivot for Excel 2010 – book coming in September

    - by Marco Russo (SQLBI)
    As you might already know, Alberto Ferrari and I are writing a book about PowerPivot 2010 for Excel. The official title is Microsoft PowerPivot for Excel 2010: Give Your Data Meaning, and you can already order it on Amazon! However, it will be published in September 2010, which is reasonable considering we are still in writing mode… Well, before buying it, consider that we are writing the book for the “real user” of PowerPivot, who doesn’t have knowledge of MDX, multidimensional databases, ETL,...(read more)

    Read the article

  • #DAX Query Plan in SQL Server 2012 #Tabular

    - by Marco Russo (SQLBI)
    The SQL Server Profiler provides a lot of information regarding the internal behavior of DAX queries sent to a BISM Tabular model. As with MDX, in DAX there is a Formula Engine (FE) and a Storage Engine (SE). The SE is usually handled by Vertipaq (unless you are using DirectQuery mode), and the Vertipaq SE Query class of events gives you a SQL-like syntax that represents the query sent to the storage engine. Another interesting class of events is the DAX Query Plan, which contains a couple...(read more)

    Read the article

  • Updates about Multidimensional vs Tabular #ssas #msbi

    - by Marco Russo (SQLBI)
    I recently read the blog post from James Serra, Tabular model: Not ready for prime time? (read also the comments, because there are discussions about a few points raised by James), and the following post from Christian Wade, Multidimensional or Tabular. In the last two years I have worked with many companies adopting Tabular in different scenarios, and I agree with some of the points expressed by James in his post (especially about missing features in Tabular compared to Multidimensional), but I strongly disagree with others. In general, Tabular is a good choice for a new project when: the development team does not have a good knowledge of Multidimensional and MDX (DAX is faster to learn — not as easy as it is sold by MS, but definitely easier than MDX); you don't need calculations based on hierarchies (common in certain financial applications, but not as common as it could seem); there are important calculations based on distinct count measures; there are complex calculations based on many-to-many relationships. Until now, I have never suggested migrating an existing Multidimensional model to a Tabular one. There should be very important reasons for that, such as performance issues in distinct count and many-to-many relationships that cannot be easily solved by optimizing the Multidimensional model, but I still have never encountered this scenario. I would say that in 80% of new projects you might use either Multidimensional or Tabular, and the real difference is the time-to-market depending on the skills of the development team. So it's not strange that those who are used to Multidimensional are not moving to Tabular, since they don't get a particular benefit from the new model unless specific requirements exist. The recent DAXMD feature that allows using SharePoint Power View on Multidimensional is a really important one, even if I'd also like to have Excel Power View enabled for this scenario (this should be just a question of time). Another scenario in which I'm seeing a growing adoption of Tabular is in companies that create models for their product/service and do so by using XMLA or Tabular AMO 2012. I am used to calling them ISVs, even if those providing services cannot really be defined in this way. These companies are facing the multitenancy challenge with Tabular, and even if this is a niche market, I see some potential here, because adopting Tabular seems a much more natural choice than Multidimensional in those scenarios where an analytical engine has to be embedded to deliver one of the features of a larger product/service delivered to customers. I'd like to see other feedback in the comments: tell your story of choosing between Tabular and Multidimensional in a BI project you started with SQL Server 2012. Thanks!

    Read the article

  • Thinking in DAX: Counting Products in the Current Status with PowerPivot

    - by AlbertoFerrari
    One of my readers came to me with an interesting formula to compute in PowerPivot. Even if I don't normally post about very specific scenarios, I think this time it is worth writing a blog post, since the formula can be created easily if you think about it in DAX, while it is very hard if you are still approaching it with an MDX or SQL mindset. Thinking in DAX is something that comes after a lot of formula authoring, something that all BI professionals should strive for, as Vertipaq in the new...(read more)

    Read the article

  • DAX editor for SQL Server

    - by Davide Mauri
    One of the major criticisms of DAX is the lack of a decent editor and, more generally, of a dedicated IDE like the one we have for T-SQL or MDX. Well, this is no longer true. On CodePlex, a very interesting and promising Visual Studio 2010 extension was released at the beginning of November 2011: http://daxeditor.codeplex.com/ IntelliSense, syntax highlighting and all the typical features offered by Visual Studio are also available for DAX. Right now you have to download the source code and compile it, and that's it!

    Read the article

  • Ad-hoc reporting similar to Microstrategy/Pentaho - is OLAP really the only choice (is OLAP even sufficient)?

    - by TheBeefMightBeTough
    So I'm getting ready to develop an API in Java that will provide all dimensions, metrics, hierarchies, etc. to a user such that they can pick and choose what they want (say, e.g., dimensions of Location (a store) and Weekly, and the metric Product Sales $), provide their choices to the API, and have it spit out an object that contains the answer to their question (the object would probably be a set of cells). I don't even believe there will be much drill up/down. The data warehouse the API will interface with is in a standard form (FACT tables, dimensions, star schema format). My question is: is an OLAP framework such as Mondrian the only way to achieve something akin to ad-hoc reporting? I can envisage a really large Cube (or VirtualCube) that contains most of the dimensions and metrics the user could ever want, which would give the illusion of ad-hoc reporting. The problem is that there is a ton of setup to do (so much XML) to get the framework to work with the data. Further, it requires specific knowledge, such as MDX, and even more so learning the framework's peculiarities (the Mondrian API). Finally, I am not positive it will scale much better than simply making queries against a SQL database. OLAP to me feels like very old technology. Is performance really an issue anymore? The alternative I can think of would be dynamic SQL. If the existing tables in the data warehouse conform to a naming scheme (FACT_, DIM_, etc.), or if a very simple config file/database table existed that stored which tables are fact tables, which are dimensions, and what metrics are available, then couldn't the API read from that and assemble the appropriate SQL query? Would this necessarily be harder than learning MDX, Mondrian (or another OLAP framework), and creating all the cubes? In general, I feel that OLAP is at the same time too powerful (supports drill up/down, complex functions) and outdated, and I am reluctant to base my architecture on it. However, I am unsure whether the alternative(s), such as rolling my own ad-hoc reporting framework using dynamic SQL, would remove any complexity while still fulfilling requirements, both functional and non-functional (e.g., scalability; some FACT tables have many millions of rows). I also wonder about other techniques (e.g., Hive). Has anyone here tried to do ad-hoc reporting? Any advice? I expect this project to take a pretty long time (3 months minimum, but probably longer), so I just do not want to commit to an architecture without being absolutely sure of its pros and cons. Thanks so much.

    Read the article

  • Use old raid drive as boot device without data loss

    - by Gabriel
    There were two disks in software RAID: /dev/md1 as swap, /dev/md2 as boot, and /dev/md3 with ext4. The software RAID was disabled by stopping and removing mdadm and then zeroing the superblock on each /dev/mdX partition with: sudo mdadm --zero-superblock /dev/sda1 sudo mdadm --zero-superblock /dev/sda2 sudo mdadm --zero-superblock /dev/sda3 On the disk that is the first boot device (I don't know if it's relevant), the system type of each partition was set back from fd to 82 or 83 with fdisk, /etc/fstab was updated, changing /dev/mdX to /dev/sdaX, and grub was reinstalled on the boot partition (/dev/sda2) with grub-install. But the system won't boot. What else should I do to use this disk as the boot device without a reinstall or data loss? Current output of fdisk:

    Device Boot      Start         End       Blocks     Id  System
    /dev/sda1         2048    33556480    16777216+    82  Linux swap / Solaris
    /dev/sda2 *   33558528    34607104      524288+    83  Linux
    /dev/sda3     34609152  3907027120  1936208984+    83  Linux

    By "it doesn't boot" I mean that it stops at the grub console (with the grub> prompt). An ls command there says: (hd0) (hd0,msdos3) (hd0,msdos2) (hd0,msdos1) (hd1) (hd1,msdos1) It's weird because hd1 was formatted with ext4...

    Read the article
