Performance Tuning of MicroStrategy Reports



Performance tuning always depends on the system and its users, so different techniques apply on a case-by-case basis.

If a report takes a long time to run, check the report SQL first and run the same SQL directly against the database before investigating anything else. If the database and MicroStrategy both take about the same amount of time, MicroStrategy is not the bottleneck; consult your DBA and optimize the query at the database level using techniques such as indexing, partitioning, or aggregate tables.
As far as improvements on the MicroStrategy side are concerned, the easiest is to make sure the report is built in the best possible way.
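As a minimal sketch of those database-side options (table and column names here are hypothetical, and the exact DDL syntax varies by platform):

    -- Index the join and filter columns the report SQL actually uses
    CREATE INDEX IDX_FACT_SALES_DAY ON FACT_SALES (DAY_ID);

    -- Pre-aggregate to the level most reports query, so they scan fewer rows
    CREATE TABLE AGG_SALES_MONTH AS
    SELECT D.MONTH_ID, F.STORE_ID, SUM(F.SALES_AMT) AS SALES_AMT
    FROM FACT_SALES F
    JOIN LU_DAY D ON F.DAY_ID = D.DAY_ID
    GROUP BY D.MONTH_ID, F.STORE_ID;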

Points to consider when tuning reports:

  •  Avoid Custom Groups where you can, since they often generate multiple passes of SQL and the run time multiplies with each element defined in the group.

  •  Use caching instead of showing live data in the report. Caching can be enabled at the Project or Report level.

  •  Templates can be used for caching purposes. Suppose you have 4 reports that share a common set of attributes and metrics: the same SQL runs against the database again and again, hurting performance. In this scenario, create a template containing the common attributes and metrics and use it as a shortcut in each report.

  •  If you are using Netezza, set the intermediate table type to derived tables instead of true temporary tables; it is often noticeably faster, as sketched below.
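A rough sketch of the difference, with hypothetical report SQL:

    -- True temporary table: a separate DDL step plus a write to temp space
    CREATE TEMPORARY TABLE TT1 AS
    SELECT STORE_ID, SUM(SALES_AMT) AS REVENUE
    FROM FACT_SALES
    GROUP BY STORE_ID;

    SELECT L.STORE_NAME, T.REVENUE
    FROM TT1 T
    JOIN LU_STORE L ON T.STORE_ID = L.STORE_ID;

    -- Derived table: the same pass inlined as a subquery in one statement
    SELECT L.STORE_NAME, T.REVENUE
    FROM (SELECT STORE_ID, SUM(SALES_AMT) AS REVENUE
          FROM FACT_SALES
          GROUP BY STORE_ID) T
    JOIN LU_STORE L ON T.STORE_ID = L.STORE_ID;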

  •  Sometimes a data type mismatch between the database and MicroStrategy causes a slowdown. A common example is a Date attribute stored as datetime in the database with no actual time component, but defined as Date in MicroStrategy. The generated query then applies convert() to the date column on every join, which in a SQL Server environment causes a noticeable slowdown. In this case, edit the column alias to be Date instead of datetime to get the same result, faster; see the sketch below.
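A sketch of the SQL Server pattern just described (hypothetical tables):

    -- Mapped as datetime: the engine wraps the join column in a conversion,
    -- which can block index use on the fact table
    SELECT D.MONTH_ID, SUM(F.SALES_AMT) AS REVENUE
    FROM FACT_SALES F
    JOIN LU_DAY D ON CONVERT(date, F.DAY_DT) = D.DAY_DT
    GROUP BY D.MONTH_ID;

    -- Column alias edited to Date: same result, direct join, no conversion
    SELECT D.MONTH_ID, SUM(F.SALES_AMT) AS REVENUE
    FROM FACT_SALES F
    JOIN LU_DAY D ON F.DAY_DT = D.DAY_DT
    GROUP BY D.MONTH_ID;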

  •  Use caches and Intelligent Cubes (I-Cubes), especially for reports that are non-prompted or subscribed. Schedule a job that caches them first, so subsequent users hit the shared cache.

  •  If you have very large tables in the warehouse, use partitioning to improve performance; an example follows below.
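For instance, Oracle-style range partitioning by date (the syntax varies by platform):

    -- Partition a large fact table by year so queries prune down to the
    -- partitions they actually need
    CREATE TABLE FACT_SALES (
        DAY_DT    DATE,
        STORE_ID  INTEGER,
        SALES_AMT NUMBER(18,2)
    )
    PARTITION BY RANGE (DAY_DT) (
        PARTITION P2023 VALUES LESS THAN (DATE '2024-01-01'),
        PARTITION P2024 VALUES LESS THAN (DATE '2025-01-01')
    );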

  •  Tune the VLDB settings. They are available at multiple levels: Project, Database Instance, metric, report, template, and so on. Among these, the Report level has the highest priority and always overrides the others. Useful VLDB properties are described below.

  •   Database Instance Level

  • Joins -> Cartesian Join Warning

The default option is to allow reports with cross joins to execute, but I find this to be a bad practice. I prefer to change this setting at the DBI level to option #3, Cancel Execution only when a warehouse table is involved in either side of the Cartesian join. The reason I use #3 over #2, Cancel Execution, is that I do want to allow temp tables to cross join, since that's how the SQL Engine handles outer joining metrics in some cases. However, there is very rarely any reason to cross join warehouse tables. Instead of allowing a report to run that will either waste system resources or give the user an incorrect result, I'd rather have the report fail immediately. An example of a time when you would want to allow a warehouse table cross join is if you're using the Report setting Preserve Attribute Lookup Values. That setting requires a cross join, so in those cases you can override the DBI-level setting by changing this property to option #1, Execute, for that Report only.
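For reference, the two cases the option distinguishes look roughly like this (hypothetical tables):

    -- Cross join between warehouse tables: almost always a modeling error,
    -- so option #3 cancels it
    SELECT S.STORE_ID, P.PRODUCT_ID
    FROM LU_STORE S
    CROSS JOIN LU_PRODUCT P;

    -- Cross join between temp tables: the SQL Engine can generate this when
    -- outer joining metric passes, so option #3 still lets it run
    SELECT T1.STORE_ID, T2.PRODUCT_ID
    FROM TT1 T1
    CROSS JOIN TT2 T2;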

  •  Metrics -> Default to Metric Name

This setting is purely personal preference and has no impact on performance. By default, the option is disabled, which gives you metric aliases of the form WJXBFS1, WJXBFS2, and so on. This can make the SQL very difficult to debug, especially with multi-pass SQL. Enabling the option uses the name of the metric as the alias instead.
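The difference in the generated SQL looks roughly like this, for a hypothetical report with two metrics named Revenue and Cost:

    -- Default: engine-generated aliases, hard to map back to the metrics
    SELECT STORE_ID,
           SUM(SALES_AMT) AS WJXBFS1,
           SUM(COST_AMT)  AS WJXBFS2
    FROM FACT_SALES
    GROUP BY STORE_ID;

    -- With Default to Metric Name enabled: self-describing aliases
    SELECT STORE_ID,
           SUM(SALES_AMT) AS REVENUE,
           SUM(COST_AMT)  AS COST
    FROM FACT_SALES
    GROUP BY STORE_ID;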
  • Metrics -> Metric Join Type
By default, this is set to Inner Join, which means that if you have 2 metrics from different passes, rows for attribute elements the passes don't have in common are dropped. I prefer Outer Join as the default here so I can be sure I am seeing all of the results. There can be cases where this has a negative performance impact, so on poorly performing reports, if I'm positive it won't lose data, I'll change it back to Inner Join at the Report level.
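A sketch of the two join types across two hypothetical metric passes TT1 and TT2:

    -- Inner join: stores that appear in only one pass are dropped
    SELECT T1.STORE_ID, T1.REVENUE, T2.BUDGET
    FROM TT1 T1
    JOIN TT2 T2 ON T1.STORE_ID = T2.STORE_ID;

    -- Outer join: every store from either pass is kept
    SELECT COALESCE(T1.STORE_ID, T2.STORE_ID) AS STORE_ID,
           T1.REVENUE, T2.BUDGET
    FROM TT1 T1
    FULL OUTER JOIN TT2 T2 ON T1.STORE_ID = T2.STORE_ID;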
  •  Query Optimization -> MD Partition Prequery Support
If you use Metadata Partition Mapping, I find I get much better performance from the prequery with the option "Use constant in prequery" instead of the default "Use count(*) in prequery".
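The two prequery styles differ roughly as follows (hypothetical partition table):

    -- Default: count(*) may scan the whole partition just to test for rows
    SELECT COUNT(*) FROM FACT_SALES_2024 WHERE DAY_DT >= DATE '2024-06-01';

    -- Use constant in prequery: the database can stop at the first match
    SELECT 1 FROM FACT_SALES_2024 WHERE DAY_DT >= DATE '2024-06-01';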
  •  Select/Insert -> Attribute Form Selection Option for Intermediate Passes
This option allows the description (DESC) forms of attributes to be selected in the individual passes instead of being picked up in the final pass. This can save joins at the end.
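A sketch, assuming a denormalized fact table that already carries the hypothetical STORE_NAME column:

    -- Default: only the ID form travels through the intermediate pass...
    CREATE TEMPORARY TABLE TT1 AS
    SELECT STORE_ID, SUM(SALES_AMT) AS REVENUE
    FROM FACT_SALES
    GROUP BY STORE_ID;

    -- ...so the final pass needs one more join to fetch the DESC form
    SELECT L.STORE_NAME, T.REVENUE
    FROM TT1 T
    JOIN LU_STORE L ON T.STORE_ID = L.STORE_ID;

    -- With this option, the DESC form rides along in the intermediate pass
    -- and the final join to the lookup disappears
    CREATE TEMPORARY TABLE TT1 AS
    SELECT STORE_ID, STORE_NAME, SUM(SALES_AMT) AS REVENUE
    FROM FACT_SALES
    GROUP BY STORE_ID, STORE_NAME;

    SELECT STORE_ID, STORE_NAME, REVENUE FROM TT1;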
  • Select/Insert -> Group by Non-ID attribute
This option prevents forms other than the ID form from being included in the GROUP BY clause.
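For example, with a hypothetical Region attribute (one way the engine can avoid grouping by the DESC form is to aggregate it away):

    -- Non-ID form included in the GROUP BY: a wider grouping key
    SELECT F.REGION_ID, R.REGION_NAME, SUM(F.SALES_AMT) AS REVENUE
    FROM FACT_SALES F
    JOIN LU_REGION R ON F.REGION_ID = R.REGION_ID
    GROUP BY F.REGION_ID, R.REGION_NAME;

    -- ID-only GROUP BY: the DESC form is aggregated instead
    SELECT F.REGION_ID, MAX(R.REGION_NAME) AS REGION_NAME,
           SUM(F.SALES_AMT) AS REVENUE
    FROM FACT_SALES F
    JOIN LU_REGION R ON F.REGION_ID = R.REGION_ID
    GROUP BY F.REGION_ID;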
  •  Select/Insert -> Custom Group Interaction With Report Filter
This option controls whether the Report Filter is applied to the filters contained in a Custom Group. I want this behavior the majority of the time, and in the specific cases where I don't, I can override the setting at the Report level.

A few more VLDB settings worth trying:



Property: Indexing > Intermediate Table Index
Default value: Don't create an index
Changed to: Create only secondary index on intermediate table
Comments: This enables the creation of indexes on intermediate temporary tables. The indexes created on the intermediate tables improve query execution for the final pass, where all the intermediate tables are joined.
Property: Indexing > Index Post String
Default value: Nothing
Changed to: GLOBAL PARTITION BY HASH (!a) nologging
Comments: When an index is created on a temp table, it is hash-partitioned on all columns of the temp table (!a in the string), and row-level transaction logs are not stored.
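With both indexing settings above, the DDL the engine emits would look roughly like this (hypothetical temp table TT1; the post string is appended verbatim, with !a expanding to the temp table's columns):

    CREATE INDEX TT1_IDX ON TT1 (STORE_ID, DAY_DT)
    GLOBAL PARTITION BY HASH (STORE_ID, DAY_DT) nologging;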
Property: Query Optimizations > Data Population for Reports
Default value: Do not normalize report data
Changed to: Normalize report data in Intelligence Server
Comments: When a report is executed, the description information for the attributes on the report (all data mapped to non-ID attribute forms) is repeated for every row. For example, suppose a report includes the attributes Region and Store, with each region having one or more stores. Without normalization, the description information for the Region attribute is repeated for every store: if the South region includes five stores, the information for South is repeated five times.
You can avoid this duplication by normalizing the report data. In that scenario, the South region description is stored only once even though the region contains five stores. This saves memory resources, but the act of normalization takes some processing time.
With this option, Intelligence Server performs the report data normalization. This is typically faster than the other normalization options, but it also consumes Intelligence Server memory. It is a good option if report performance is the top priority.
Property: Select/Insert > Attribute Form Selection Option for Intermediate Pass
Default value: Select ID form only
Changed to: Select ID and other forms if they are on the template and available in the existing join tree

Property: Select/Insert > Attribute Selection Option for Intermediate Pass
Default value: Select only the attributes needed
Changed to: Select other attributes in the current join tree if they are on the template and their child attributes have already been selected

Comments: Normally, the MicroStrategy SQL Engine selects the minimum number of columns needed in each pass. For an intermediate pass, the SQL Engine usually selects only attribute ID forms, plus the attributes necessary to make the join (usually key attributes). Then, in the final pass, the additional attributes or attribute forms needed for report display are joined in. This algorithm is optimal in most cases, as it minimizes the size of intermediate tables. However, in certain schemas, especially denormalized ones and schemas that use fact tables as both lookup tables and relationship tables, it may cause additional joins in the final pass. These two properties deliberately build bigger (wider) intermediate tables to save those joins in the final pass, trading space for time.
 
