Search Results

Search found 422 results on 17 pages for 'marco demaio'.

Page 3 of 17

  • UTF-8 conversion doesn't always work

    - by Marco Piccinni
    I searched other questions before posting here and I didn't find anything similar. I have to scrape various UTF-8 web pages which contain text like "Oggi è una bellissima giornata"; the problem is with the character "è". I extract this text with JTidy and an XPath query expression and I convert it with byte[] content = filteredEncodedString.getBytes("utf-8"); String result = new String(content,"utf-8"); where filteredEncodedString contains the text "Oggi è una bellissima giornata". This procedure works on most of the web pages analyzed so far, but in some cases it doesn't extract a UTF-8 string. The page encoding is always the same and the text is similar. Any ideas about the problem? Thanks Marco
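
    As a side note, getBytes("utf-8") followed by new String(content, "utf-8") is a round trip through the same charset, so it cannot repair a string that was already decoded with the wrong encoding earlier in the pipeline. A minimal, hedged sketch of the usual fix is to decode the raw bytes once, with the charset the server actually declares; the URL and class name below are only placeholders:

        import java.io.ByteArrayOutputStream;
        import java.io.InputStream;
        import java.net.URL;
        import java.net.URLConnection;

        public class CharsetAwareFetch {
            public static void main(String[] args) throws Exception {
                URLConnection conn = new URL("http://www.example.com/").openConnection();

                // Prefer the charset declared in the HTTP Content-Type header; fall back to UTF-8.
                String contentType = conn.getContentType(); // e.g. "text/html; charset=ISO-8859-1"
                String charset = "UTF-8";
                if (contentType != null && contentType.toLowerCase().contains("charset=")) {
                    charset = contentType.substring(contentType.toLowerCase().indexOf("charset=") + 8).trim();
                }

                // Read the raw bytes without any intermediate String conversion.
                ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                try (InputStream in = conn.getInputStream()) {
                    byte[] chunk = new byte[4096];
                    int n;
                    while ((n = in.read(chunk)) != -1) {
                        buffer.write(chunk, 0, n);
                    }
                }

                // Decode ONCE, with the charset the server actually used.
                String page = new String(buffer.toByteArray(), charset);
                System.out.println(page.contains("è"));
            }
        }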

    Read the article

  • eeePC 1001HA/1101HA max resolution when connected to external display?

    - by Marco Demaio
    Hello, I would like to buy a new Eee PC 1001HA or 1101HA. I know the max display resolution is 1024x600 for the Eee PC 1001 and 1366x768 for the Eee PC 1101, but what's the maximum resolution of the graphics board when connecting these two types of Eee PC to an external LCD monitor? Let's say the external LCD monitor supports a full HD resolution of 1920x1080: are these graphics boards able to go up to such a resolution? It's really incredible to me how such a useful piece of information is missing from every ASUS website. Eee PCs are very well suited to being connected to an external monitor, so I can't believe how difficult it is to find this out. I also downloaded the manual, but it's not in there either. So I was hoping someone has got one and knows the answer. Thanks!

    Read the article

  • What resolution can the eeePC 1001HA/1101HA handle with an external monitor plugged in?

    - by Marco Demaio
    I would like to buy a new Eee PC 1001HA or 1101HA. I know the max display resolution is 1024x600 for the Eee PC 1001 and 1366x768 for the Eee PC 1101, but what's the maximum resolution of the graphics board when connecting these two computers to an external LCD monitor? Let's say the external monitor supports a full HD resolution of 1920x1080. Are these graphics boards able to go up to such a resolution? It's really incredible to me how such a useful piece of information is missing from every ASUS website. Eee PCs are very well suited to being connected to an external monitor, so I can't believe how difficult it is to find this information. I downloaded the manual, but it's not in there either. So I'm hoping someone has got one and knows the answer.

    Read the article

  • DNS PTR record when domain on shared IP address

    - by Marco Demaio
    Hello, I own a typical shared-IP hosting plan and domain, and I can modify the domain's DNS from the control panel. The mail server shares the same IP address, so my typical DNS config is:
    www.mydomain.com A -> IP
    mydomain.com A -> IP
    ftp.mydomain.com A -> IP
    mail.mydomain.com A -> IP
    mydomain.com MX(10) -> IP
    I have read some Q&A on this site suggesting adding a PTR record, mainly for the mail server. I would like to add a PTR record for my domain, and I have two questions: 1) Can a PTR record be added even if the hosting/mail server is on a shared IP address, or do I need a dedicated IP? 2) How do I set up a PTR record? I mean, does it look like an A record: mydomain.com (PTR) -> myip
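
    For what it's worth, a PTR record does not live in your domain's zone at all: it belongs to the reverse (in-addr.arpa) zone of the IP address, which is managed by whoever owns the IP block, usually the hosting provider. So on a shared IP there is normally one generic PTR for the whole server, and only the provider can change it. A quick, hedged way to check from Java which name an IP currently reverse-resolves to (the IP below is just a documentation placeholder):

        import java.net.InetAddress;

        public class PtrCheck {
            public static void main(String[] args) throws Exception {
                // Placeholder: replace with the shared IP of your hosting plan.
                InetAddress ip = InetAddress.getByName("203.0.113.10");

                // getCanonicalHostName() performs a reverse (PTR) lookup;
                // if no PTR record exists it simply returns the IP as a string.
                System.out.println(ip.getCanonicalHostName());
            }
        }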

    Read the article

  • Can many addon domains slow down cPanel or create problems?

    - by Marco Demaio
    I actually resell hosting plans using one single cPanel account with many addon domains under it. Basically, for each new user I don't create a new cPanel account; I simply create a new addon domain and give him the necessary space. I know that in this way the final user won't be able to manage his emails and won't be able to access all cPanel features, but that's OK. I know my account allows unlimited addon domains, but by adding many addon domains is there anything that might slow down cPanel or create problems? Any suggestions are welcome about the way I'm using cPanel, which might not be the usual way of using it (for instance: "be aware that mail from different addon domains under the same cPanel account could create many problems", etc.). Thanks!

    Read the article

  • Can many addon domains slow down cPanel or create problems?

    - by Marco Demaio
    I actually resell hosting plans using one single cPanel account with many addon domains under it. Basically, for each new user I don't create a new cPanel account; I simply create a new addon domain and give him the necessary space. I know that in this way the final user won't be able to manage his emails and won't be able to access all cPanel features, but that's OK. My only question is: is there a limit on the number of addon domains that can be added to a cPanel account? I know my account allows unlimited addon domains, but is there anything that might slow down cPanel or create problems? Is there any advice you could give me about the way I'm using cPanel, which might not be the usual way of using it? (For instance: "be aware that AWStats could run very slowly or crash", or "be aware that mail from different addon domains under the same cPanel account could create many problems", etc.) Many thanks!

    Read the article

  • AWStats showing strange 404 referrers

    - by Marco Demaio
    When I look at the AWStats 404 errors I sometimes see strange referrers. For example, on www.mydomain.com I might see a 404 error reported in AWStats that says: URL (not found): some-file.jpg; Referrer: http://www.mydomain.com/some-page.html. some-file.jpg is a file that does not exist, so it's not strange that someone who tried to reach it got back a 404 from the server. The strange part is that the referring page does not exist either: http://www.mydomain.com/some-page.html DOES NOT EXIST, so how could it be the referrer? Is it some client spoofing the referrer? Thanks!

    Read the article

  • ping alternative to measure routing distance (on Windows)

    - by Marco Demaio
    Hello, in order to roughly measure the routing distance (to see whether a server is close to my country or too far away) I usually use the ping command. I'm in Italy: when I ping Italian servers I get about 36 ms, when I ping US East Coast servers I get an average of 120 ms, when I ping US West Coast servers I get an average of 200 ms, and so on. Unfortunately some web hosts turn off the ping reply on their servers, so my question is: how do I estimate the routing distance in that case? Is there another easy-to-use command in Windows to accomplish the same task? Thanks!
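
    When ICMP is blocked, one workaround is to time a TCP handshake against a port the server does answer on (80 or 443 for a web host); the connect time is roughly comparable to a ping round trip. A minimal, hedged Java sketch (host and port are placeholders):

        import java.net.InetSocketAddress;
        import java.net.Socket;

        public class TcpPing {
            public static void main(String[] args) throws Exception {
                String host = "www.example.com"; // placeholder target
                int port = 80;                   // a port the server actually serves

                // Resolve the name first so DNS time is not counted in the measurement.
                InetSocketAddress target = new InetSocketAddress(host, port);

                long start = System.nanoTime();
                try (Socket socket = new Socket()) {
                    // connect() returns after the TCP handshake completes,
                    // so its duration approximates one network round trip.
                    socket.connect(target, 3000);
                }
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println("TCP connect to " + host + " took ~" + elapsedMs + " ms");
            }
        }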

    Read the article

  • Average mail quota usage: tricks to implement unlimited email quota.

    - by Marco Demaio
    I suppose that hosts who provide an unlimited mail quota are only claiming it is unlimited, and hope that they won't run out of disk space. Correct me if I'm wrong. In order to pull off such a trick they probably have to calculate the average quota actually used by a typical user. Let's say that on a 100 GB hosting space I offer 200 x 1 GB mailboxes: obviously, if all users filled their mailboxes my server would stop working, because they would require 200 GB. But I think I can expect this trick to work, because it will never happen (or it's extremely improbable) that all users fill up all their mailboxes. So the QUESTIONS are: what's the average email usage? Can we say that a user normally fills up 1/2 or 1/3 of the quota you provide him? Thanks for any answers/suggestions you might provide.
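
    As a back-of-the-envelope check, the oversell factor is simply the physical disk space divided by (quota per mailbox x expected average fill fraction). A tiny hedged sketch with made-up numbers; the 1/3 average fill is an assumption, exactly the figure the question is asking about:

        public class QuotaOversell {
            public static void main(String[] args) {
                double diskGb = 100.0;              // physical space available
                double quotaGb = 1.0;               // quota promised per mailbox
                double avgFillFraction = 1.0 / 3.0; // assumed average usage of the quota

                // Expected space actually consumed per mailbox.
                double expectedUseGb = quotaGb * avgFillFraction;

                // How many mailboxes can be sold before the *expected* usage fills the disk.
                long maxMailboxes = (long) Math.floor(diskGb / expectedUseGb);
                System.out.println("Mailboxes you can oversell: " + maxMailboxes); // 300 with these numbers
            }
        }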

    Read the article

  • Server speed: sharing one script.php or using many copies of the same script.php

    - by Marco Demaio
    Let's assume I have thousands of domains on the same Apache server. Each domain is in a folder under the server's public_html document folder, so it can be accessed by calling "www.somedomain.com" or by calling "www.serverdomain.com/somedomain_folder". Each domain hosts a website that needs a certain script.php (identical for each domain). From a coding point of view, it's obvious that it's better to use a single script.php: when I update it with new features/bug fixes etc., I only need to update one file on the server and it will work for all domains. But from a server point of view? If I use a single script, all domains will access it at the same time; will the server run slower compared to the situation where each domain calls its own copy of the script?

    Read the article

  • Silverlight 4. Activator.CreateInstance uses a huge amount of memory

    - by Marco
    Hi, I have been playing a bit with Silverlight and am trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach, so that I can have an extensible and pluggable application. The application is pretty huge, and to keep the performance and the initial loading acceptable I have built some helper classes to load in the background all pages and user controls that might be used later on. On Silverlight 3.0 everything was running smoothly without any problem so far. Switching to SL 4.0, I have noticed that when the process gets to creating the instances of the user controls using Activator.CreateInstance, the layout freezes unexpectedly for a minute and sometimes more. Looking at the task manager, the memory usage of IE jumps from 50 MB to 400 MB and sometimes to 1.5 GB. If the process doesn't take that long, the layout is rendered properly and the memory falls back to 50 MB. Otherwise everything crashes due to an out-of-memory exception. Has anybody encountered the same problem? Or does anybody have a solution to this tricky issue? Thanks in advance, Marco

    Read the article

  • OPTICS Clustering algorithm. How to get the best epsilon

    - by Marco Galassi
    I am implementing a project which needs to cluster geographical points. The OPTICS algorithm seems to be a very nice solution. It needs just two input parameters (MinPts and epsilon), which are, respectively, the minimum number of points needed to consider them a cluster, and the distance value used to decide whether two points can be placed in the same cluster. My problem is that, due to the extreme variety of the points, I can't set a fixed epsilon. Just look at the image below: the same point structure at a different scale would give very different results. Suppose I set MinPts = 2 and epsilon = 1 km. On the left, the algorithm would create 2 clusters (red and blue), but on the right it would create one single cluster containing all of the points (red), whereas I would like to obtain 2 clusters on the right as well. So my question is: is there any way to calculate the epsilon value dynamically so as to get this result? Thank you very much, and excuse my poor English. Marco
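
    One common heuristic, borrowed from the DBSCAN literature (where epsilon plays the same role), is to derive epsilon from the data itself: compute each point's distance to its k-th nearest neighbour with k = MinPts, sort those distances, and pick a value around the "knee" of the resulting curve. A rough, hedged Java sketch using plain Euclidean distance and a fixed percentile as a crude stand-in for the knee (real geographic data would need haversine distances):

        import java.util.Arrays;

        public class EpsilonHeuristic {

            // Distance of every point to its k-th nearest neighbour, returned sorted ascending.
            static double[] kDistances(double[][] points, int k) {
                double[] result = new double[points.length];
                for (int i = 0; i < points.length; i++) {
                    double[] dists = new double[points.length];
                    for (int j = 0; j < points.length; j++) {
                        double dx = points[i][0] - points[j][0];
                        double dy = points[i][1] - points[j][1];
                        dists[j] = Math.sqrt(dx * dx + dy * dy);
                    }
                    Arrays.sort(dists);                                // dists[0] is the point itself
                    result[i] = dists[Math.min(k, points.length - 1)];
                }
                Arrays.sort(result);
                return result;
            }

            public static void main(String[] args) {
                // Toy data: two small groups, far apart.
                double[][] points = {
                    { 0.0, 0.0 }, { 0.1, 0.0 }, { 0.2, 0.1 },
                    { 10.0, 10.0 }, { 10.1, 10.0 }, { 10.2, 10.1 }
                };
                int minPts = 2;
                double[] kDist = kDistances(points, minPts);

                // Crude knee pick: a high percentile of the k-distance curve.
                double epsilon = kDist[(int) Math.floor(0.9 * (kDist.length - 1))];
                System.out.println("Suggested epsilon: " + epsilon);
            }
        }

    Because the k-distance curve scales with the data, the suggested epsilon follows the spread of the points instead of being hard-coded, which is exactly what the question is after.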

    Read the article

  • File closed with fclose() but still in use

    - by Marco
    Hi all, I've got a problem deleting/overwriting a file which is also being read by my program. The problem seems to be that, because my program is reading data from the file (output.txt), the file is put in an 'in use' state which makes it impossible to delete or overwrite. I don't understand why the file stays 'in use', because I close it after use with fclose(). This is my code:

    bool bBool = true;
    while (bBool) {
        // Run myprogram.exe to generate (a new) output.txt

        // Create a file pointer and open the file
        FILE* pInputFile = NULL;
        pInputFile = fopen("output.txt", "r");

        // ... then I do some reading using fscanf() ...

        // And when I'm done reading I close the file using fclose()
        fclose(pInputFile);

        // The next step is deleting output.txt
        if (remove("output.txt") == -1) {
            // ERROR
        } else {
            // Successful
        }
    }

    I use fclose() to close the file, but the file remains in use by my program until the program is completely shut down. What is the solution to free the file so it can be deleted/overwritten? In reality my code isn't a loop without an end ;) Thanks in advance! Marco

    Read the article

  • PowerShell Script to Deploy Multiple VM on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
    This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn't easily find on the web simple PowerShell scripts to help me deploy the virtual machines on Azure that I use for testing and development. Since I need to deploy, start, stop and remove many virtual machines created from a common image I created (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (though I suggest using Windows PowerShell ISE), so I also wanted to fire the commands in parallel as soon as possible, without losing the results in the console. In order to execute multiple commands in parallel, I used the Start-Job cmdlet, and with Get-Job and Receive-Job I wait for job completion and display the messages generated during background command execution. This technique allows me to reduce execution time when I have to deploy, start, stop or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot really be executed in parallel, but only one part of their execution time is subject to this lock, so you still obtain a better response time in these scenarios (this is the case for the provisioning of a new VM). Finally, when you remove the VMs you still have the disks containing the virtual machines to remove. This cannot be done right after the VM removal, because you have to wait until the removal operation is completed on Azure. So I wrote a script that you run a few minutes after VM removal to delete the disks (and VHDs) no longer related to a VM. I just check that the disks were associated with the original image name used to provision the VMs (so I don't remove other disks deployed by other batches that I might want to preserve). These examples are specific to my scenario; if you need more complex configurations you have to change and adapt the code. But if your need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts:
    ProvisionVMs: provisions many VMs in parallel starting from the same image. It creates one service for each VM.
    RemoveVMs: removes all the VMs in parallel. It also removes the service created for each VM.
    StartVMs: starts all the VMs in parallel.
    StopVMs: stops all the VMs in parallel.
    RemoveOrphanDisks: removes all the disks no longer used by any VM. Run this script a few minutes after the RemoveVMs script.
    ProvisionVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    # Name of storage account (where VMs will be deployed)
    $StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"

    function ProvisionVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            $Location = "Copy the Location property you get from Get-AzureStorageAccount"
            $InstanceSize = "A5"        # You can use any other instance, such as Large, A6, and so on
            $AdminUsername = "UserName" # Write the name of the administrator account in the new VM
            $Password = "Password"      # Write the password of the administrator account in the new VM
            $Image = "Copy the ImageName property you get from Get-AzureVMImage"
            # You can list your own images using the following command:
            # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }
            New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |
                Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername |
                New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose
        }
    }

    # Set the proper storage - you might remove this line if you have only one storage in the subscription
    Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list provisions one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    ProvisionVM "test10"
    ProvisionVM "test11"
    ProvisionVM "test12"
    ProvisionVM "test13"
    ProvisionVM "test14"
    ProvisionVM "test15"
    ProvisionVM "test16"
    ProvisionVM "test17"
    ProvisionVM "test18"
    ProvisionVM "test19"
    ProvisionVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup of jobs
    Remove-Job *

    # Displays batch completed
    echo "Provisioning VM Completed"

    RemoveVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function RemoveVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Remove-AzureService -ServiceName $VmName -Force -Verbose
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list removes one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    RemoveVM "test10"
    RemoveVM "test11"
    RemoveVM "test12"
    RemoveVM "test13"
    RemoveVM "test14"
    RemoveVM "test15"
    RemoveVM "test16"
    RemoveVM "test17"
    RemoveVM "test18"
    RemoveVM "test19"
    RemoveVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Remove VM Completed"

    StartVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function StartVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list starts one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    StartVM "test10"
    StartVM "test11"
    StartVM "test12"
    StartVM "test13"
    StartVM "test14"
    StartVM "test15"
    StartVM "test16"
    StartVM "test17"
    StartVM "test18"
    StartVM "test19"
    StartVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Start VM Completed"

    StopVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function StopVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list stops one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    StopVM "test10"
    StopVM "test11"
    StopVM "test12"
    StopVM "test13"
    StopVM "test14"
    StopVM "test15"
    StopVM "test16"
    StopVM "test17"
    StopVM "test18"
    StopVM "test19"
    StopVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Stop VM Completed"

    RemoveOrphanDisks

    $ImageName = "Copy the ImageName property you get from Get-AzureVMImage"
    # You can list your own images using the following command:
    # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

    # Remove all orphan disks coming from the image specified in $ImageName
    Get-AzureDisk |
        Where-Object {$_.AttachedTo -eq $null -and $_.SourceImageName -eq $ImageName} |
        Remove-AzureDisk -DeleteVHD -Verbose

    Read the article

  • SSIS package incompatibilities between SSIS 2008 and SSIS 2008 R2

    - by Marco Russo (SQLBI)
    When you install the SQL Server 2008 R2 workstation components you get a newer version of BIDS (BI Development Studio, included in the workstation components) that replaces the BIDS 2008 version (BIDS 2005 still lives side-by-side). Everything would be fine if you could use the newer version to edit any 2008 AND 2008 R2 project. The SSIS editor doesn't offer a way to set the "compatibility level" of a package, because it is almost completely unchanged. However, if a package has an ADO.NET Destination Adapter, there is a difference...(read more)

    Read the article

  • How to install SpeedFiler on Outlook 2010 (aka Outlook 14)

    - by Marco Russo (SQLBI)
    This is off-topic here on SQLBlog, I know, but I think there will be many users like me wanting to find the solution to this problem. If you have SpeedFiler, there is a problem installing it on Outlook 2010. The SpeedFiler setup stops, showing this message: "SpeedFiler 2.0.0.0 works with the following products: Microsoft Office Outlook 2003, Microsoft Office Outlook 2007. None of these products seems to be installed on your system. SpeedFiler will not be installed." Well, in reality SpeedFiler works...(read more)

    Read the article

  • Difference between LASTDATE and MAX for semi-additive measures in #DAX

    - by Marco Russo (SQLBI)
    I recently wrote an article on SQLBI about semi-additive measures in DAX. I included the formulas for the common calculations, and there is an interesting point that is worth a longer digression: the difference between LASTDATE and MAX (the same applies to FIRSTDATE and MIN; I just describe the former, for the latter replace the corresponding names). LASTDATE is a DAX function that receives an argument that has to be a date column and returns the last date active in the current filter context. Apparently, it is the same value returned by MAX, which returns the maximum value of its argument in the current filter context. Of course, MAX can receive any numeric type (including date), whereas LASTDATE only accepts a column of type date. But overall, they seem identical in their result. However, the difference is a semantic one. In fact, this expression:

    LASTDATE ( 'Date'[Date] )

    could also be rewritten as:

    FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) )

    LASTDATE is a function that returns a table with a single column and one row, whereas MAX returns a scalar value. In DAX, any expression with one row and one column can be automatically converted into the corresponding scalar value of the single cell returned. The opposite is not true. So you can use LASTDATE in any expression where a table or a scalar is required, but MAX can be used only where a scalar expression is expected. Since LASTDATE returns a table, you can use it in any expression that expects a table as an argument, such as COUNTROWS. In fact, you can write this expression:

    COUNTROWS ( LASTDATE ( 'Date'[Date] ) )

    which will always return 1 or BLANK (if there are no dates active in the current filter context). You cannot pass MAX as an argument of COUNTROWS. You can pass to LASTDATE a reference to a column or any table expression that returns a column. The following two syntaxes are semantically identical:

    LASTDATE ( 'Date'[Date] )
    LASTDATE ( VALUES ( 'Date'[Date] ) )

    The result is the same, and the use of VALUES is not required because it is implicit in the first syntax, unless you have a row context active. In that case, be careful: using LASTDATE with a direct column reference in a row context produces a context transition (the row context is transformed into a filter context) that hides the external filter context, whereas using VALUES in the argument preserves the existing filter context without applying the context transition of the row context (see the LastDate and Values columns in the following query and result). You can use any other table expression (including a FILTER) as the LASTDATE argument. For example, the following expression will always return the last date available in the Date table, regardless of the current filter context:

    LASTDATE ( ALL ( 'Date'[Date] ) )

    The following query recaps the results produced by the different syntaxes described.

    EVALUATE
        CALCULATETABLE(
            ADDCOLUMNS(
                VALUES ( 'Date'[Date] ),
                "LastDate", LASTDATE( 'Date'[Date] ),
                "Values", LASTDATE( VALUES ( 'Date'[Date] ) ),
                "Filter", LASTDATE( FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) ) ),
                "All", LASTDATE( ALL ( 'Date'[Date] ) ),
                "Max", MAX( 'Date'[Date] )
            ),
            'Date'[Calendar Year] = 2008
        )
    ORDER BY 'Date'[Date]

    The LastDate column repeats the current date, because the context transition happens within the ADDCOLUMNS. The Values column preserves the existing filter context from being replaced by the context transition, so the result corresponds to the last day in year 2008 (which is filtered in the external CALCULATETABLE). The Filter column works like the Values one, even though we use the FILTER instead of the LASTDATE approach. The All column shows the result of LASTDATE ( ALL ( 'Date'[Date] ) ), which ignores the filter on Calendar Year (in fact the date returned is in year 2010). Finally, the Max column shows the result of the MAX formula, which is the easiest to use; it just doesn't return a table when you need one (as in a filter argument of CALCULATE or CALCULATETABLE, where using LASTDATE is shorter). I know that using LASTDATE in complex expressions might create some issues. In my experience, the fact that a context transition happens automatically in the presence of a row context is the main source of confusion and unexpected results in DAX formulas using this function. For a reference of DAX formulas using MAX and LASTDATE, read my article about semi-additive measures in DAX.

    Read the article

  • The right way to start out in game development/design [closed]

    - by Marco Sacristão
    Greetings everyone, I'm a 19-year-old student looking for some help in the field of game development. This question may seem a bit overused, but the fact is that game development has been my lifelong dream, and after several hours of searching I've realized that I've been going in circles for the past three or four months while researching how to really get down and dirty with game development, therefore I decided to ask you if you could help me out. Let me start with some information about me and things I've already learned about game development, which might help you help me out (wordplay!): I'm not an expert programmer, but I do know how to program in several languages, including C and Java (currently learning Java in my Computer Engineering degree), though my methodology might not be the most correct in terms of syntax (hence my difficulty in starting out: I'm afraid that the starting point might not be the most correct one, and that it would instill a wrongful development methodology that would have to be corrected later on, in game development or other projects). I have yet to work on a project as large as a game; in my programming learning curve I've never done a project on the scale of a video game, only very small software (PHP front-ends and back-ends, with some basic jQuery and CSS knowledge). I'm not the biggest mathematician or physicist, but I already know that is not a problem, because there are several game engines available for use and integration with home-made projects (Box2D, etc.). I've also learned about some libraries that could be included in said projects to ease some parts of game development, like SDL for example. I do not know how sprites, states, particles or any specific game-related techniques work. With that being said, you can see that I have some ideas about game development, but I have absolutely no clue how to design and produce a game, or even how game-like mechanics work. It does not have to be a complex game just to start out; I'd rather learn the basics of game design (like 2D drawing, tiling, object collision) and test them in a language that I feel comfortable with, which could later be migrated to other platforms, as long as what I've learned is the correct way to do things, and not just something that I've learned from some guy on YouTube by replicating the code in the video. I'm sorry if my question is not in the best format possible, but I've got so many unanswered questions on my mind that I don't know where to start! Thank you for reading.

    Read the article

  • Implement Budget Allocation in DAX for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    - by Marco Russo (SQLBI)
    Comparing sales and budget, or costs and budget, is a very common operation. However, it is often the case that the tables containing the budget and the data to compare it with have different granularities. There are two ways to handle that: you can limit the comparison to the granularity that is common to the two tables, or you can allocate the budget where it's not defined. For example, if you have a budget defined by quarter and category, you might want to allocate it by month and product. In this way, you can do the comparison as if you had a more granular definition of the budget, without actually having to do the manual job of allocating the data (usually in an Excel worksheet!). If you want to do budget allocation in DAX, you can use the Budget Patterns we published on DAX Patterns. If you come from an MDX/OLAP background, at first you might find it hard to work without the attribute hierarchies that help you propagate the budget values to lower hierarchical levels. However, I think that once you get used to DAX, you will find the behavior very predictable and easy to "debug", even for more complex allocation formulas. You just have to be careful in writing the DAX formula, but the pattern we wrote should help you design the right data model, without creating physical relationships to the budget table! This pattern is also based on the Handling Different Granularities scenario I discussed a couple of weeks ago.

    Read the article

  • Optimize SUMMARIZE with ADDCOLUMNS in Dax #ssas #tabular #dax #powerpivot

    - by Marco Russo (SQLBI)
    If you started using DAX as a query language, you might have encountered some performance issues when using SUMMARIZE. The problem is related to the calculations you put in the SUMMARIZE by adding what are called extension columns, which compute their value within a filter context defined by the rows considered in the group that SUMMARIZE uses to produce each row in the output. Most of the time, for simple table expressions used in the first parameter of SUMMARIZE, you can optimize performance by removing the extended columns from the SUMMARIZE and adding them with an ADDCOLUMNS function. In practice, instead of writing:

    SUMMARIZE( <table>, <group_by_column>, <column_name>, <expression> )

    you can write:

    ADDCOLUMNS(
        SUMMARIZE( <table>, <group_by_column> ),
        <column_name>, CALCULATE( <expression> )
    )

    The performance difference might be huge (orders of magnitude), but this optimization might produce different semantics, and in those cases it should not be used. A longer discussion of this topic is included in my Best Practices Using SUMMARIZE and ADDCOLUMNS article on SQLBI, which also includes several details about the DAX syntax with extended columns. For example, did you know that you can create an extended column in SUMMARIZE and ADDCOLUMNS with the same name as an existing measure? It is *not* a good thing to do, and by reading the article you will discover why. Enjoy DAX!

    Read the article

  • Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model #ssas #tabular #bism

    - by Marco Russo (SQLBI)
    Alberto, Chris and I spent many months (many nights, holidays and also working days) writing the book we would have liked to read when we started working with Analysis Services Tabular. A book that explains how to use Tabular, how to model data with Tabular, how Tabular works internally and how to optimize a Tabular model. All those things you need to start on a real project in order to make a happy customer. You know, we're all consultants after all, so customer satisfaction is really important if we want to be paid for our job! Now the writing is finished, we're in the final stage of editing and reviews, and we look forward to getting our print copies. Its title is very long: Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model. But the important thing is that you can already (pre)order it. This is the list of chapters:
    01. BISM Architecture
    02. Guided Tour on Tabular
    03. Loading Data Inside Tabular
    04. DAX Basics
    05. Understanding Evaluation Contexts
    06. Querying Tabular
    07. DAX Advanced
    08. Understanding Time Intelligence in DAX
    09. Vertipaq Engine
    10. Using Tabular Hierarchies
    11. Data Modeling in Tabular
    12. Using Advanced Tabular Relationships
    13. Tabular Presentation Layer
    14. Tabular and PowerPivot for Excel
    15. Tabular Security
    16. Interfacing with Tabular
    17. Tabular Deployment
    18. Optimization and Monitoring
    And this is the book cover – have a good read!

    Read the article

  • DATE function does not support all the dates in DAX by design #powerpivot #tabular #dax

    - by Marco Russo (SQLBI)
    The DATE function in DAX has this simple syntax: DATE( <year>, <month>, <day> ). If you are like me, you never read the BOL note that says quite clearly that it supports dates beginning with March 1, 1900. In fact, I wrongly assumed that it would support any date that can be represented in a Date data type in Data Models, i.e. all dates from January 1, 1900. The funny thing is that in some of the BOL documentation you will find that the Date data type supports dates after March 1, 1900 (which seems not to include that date, but this is a detail…). But we should not digress. The real issue is that if you try to call the DATE function passing values between January 1 and February 28, 1900, you will see a different day as a result.

    evaluate row ( "x", DATE( 1900, 1, 1 ) )
    -- returns the WRONG result
    -- [x] 12/31/1899 12:00:00 AM

    evaluate row ( "x", DATE( 1900, 2, 29 ) )
    -- returns the WRONG result
    -- [x] 2/28/1900 12:00:00 AM

    evaluate row ( "x", DATE( 1900, 3, 1 ) )
    -- returns the CORRECT result
    -- [x] 3/1/1900 12:00:00 AM

    As usual, this is not a bug. It is "by design". The DATE function works this way in Excel, and in Excel, too, it was "by design". In this case the design is having the same bug as Lotus 1-2-3, which treated 1900 as a leap year, even though it isn't. The first release of Lotus 1-2-3 is dated 1983. I hope many of my readers are younger than that. I opened a bug on Connect. Please vote for it. I would like Microsoft to change this type of item from "by design" (as we can expect) to "by genetic disease". Or "by historical respect", in order to be more politically correct.

    Read the article

  • #PowerPivot Workshop Online for America’s Time Zones #ppws

    - by Marco Russo (SQLBI)
    After so many requests, we have finally arranged an online edition of the PowerPivot Workshop dedicated to the Americas' time zones! It is scheduled for December 19-20, 2012, with this schedule:
    US Eastern Time (EST): 10:00am-1:00pm / 2:00pm-5:00pm
    US Central Time (CST): 9:00am-12:00pm / 1:00pm-4:00pm
    US Pacific Time (PST): 7:00am-10:00am / 11:00am-1:00pm
    Bogotá (Colombia): 10:00am-1:00pm / 2:00pm-5:00pm
    São Paulo (Brazil): 1:00pm-4:00pm / 5:00pm-8:00pm
    Buenos Aires (Argentina): 12:00pm-3:00pm / 4:00pm-7:00pm
    ...(read more)

    Read the article

  • xVelocity engines compared: VertiPaq vs ColumnStore #ssas #vertipaq #xvelocity #sql #tabular

    - by Marco Russo (SQLBI)
    Over the last months Alberto and I have worked on several projects using Analysis Services Tabular, and we had to face real-world issues such as complex queries, large data volumes, frequent data updates and so on. Sometimes we faced the challenge of comparing Tabular performance with SQL Server. It seemed nonsense, because even if the same core xVelocity technology is implemented in both products (SQL Server 2012 uses ColumnStore indexes, whereas Analysis Services 2012 uses VertiPaq), we initially assumed that the deeper optimization of the in-memory engine used by Analysis Services would always beat SQL Server. However, we discovered several important things:
    Processing time might be different, and having data on SQL Server could make ColumnStore much faster for processing.
    Partitioning in SQL Server might be much more effective for query performance than in Analysis Services.
    A single query can easily scale to more processors on SQL Server, whereas in Analysis Services the formula engine is single-threaded and could be a bottleneck for certain queries.
    In the case of a large workload with many concurrent users, the storage engine cache in Analysis Services could be a big advantage over SQL Server, especially for scalability.
    As you can see, these considerations are not always obvious, and you might be tempted to make other assumptions based on this information. Well, don't do that. Before anything else, read the whitepaper VertiPaq vs ColumnStore Comparison written by Alberto Ferrari. Then, measure your workload. Finally, draw some conclusions. But don't make too many assumptions. You might be wrong, as we were at the beginning of this journey.

    Read the article

  • Converting #MDX to #DAX and PowerPivot Workshop online #ppws

    - by Marco Russo (SQLBI)
    I just published the article Converting MDX to DAX – First Steps on the renewed SQLBI web site, about converting MDX to DAX. The reason is that with BISM Tabular in Analysis Services 2012 you will be able to write queries in both DAX and MDX. If you already know MDX, you might wonder how to "translate" your MDX knowledge into DAX. I think this is another way you can improve your knowledge of DAX: it is built on different concepts, and this comparison should be helpful for that purpose. This is...(read more)

    Read the article
