Tuesday, October 20, 2015

Working with Undefined (Null) Values in an IBM Cognos TM1 Cube

In an IBM Cognos TM1 server, numeric values are stored as 0 by default.  When cell content is retrieved, either in TI or through rules, a value of zero is returned.  This means TM1 does NOT differentiate between missing data (a null value) and zero.  In other words, you cannot tell apart the cells for which no data exists from the ones where the data is 0.
To handle situations where we need to differentiate between missing data and zero, we can use UNDEFVALS in the rules of an IBM Cognos TM1 cube.  If UNDEFVALS is used, the default value changes from 0 to a special undefined value.

Let’s create a simple 3-dimensional cube from the Planning Sample database, using the following dimensions:
  1. plan_version
  2. plan_business_unit
  3. plan_department
We will be working with numeric data alone.  The cube, without any rules, looks like this:


Let us go ahead and add UNDEFVALS; as the 1st line of the rules file and save the rules.  Notice the view:

 
All the zeroes have vanished; instead you will see blanks.  At the intersection of UK and Direct, key in a value of 0, turn on zero suppression and recalculate the view.  It will appear like this:


This means that zero suppression actually suppresses the special undefined values rather than the zeroes.  It is a side effect of using UNDEFVALS in the rule definition.  This unintended consequence is visible in Turbo Integrator (TI) as well.

 

If you were to save this view and use it as the source in TI (see the screen for the view spec), then perform an AsciiOutput of the data records, you will get zeroes as part of your data.
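As a minimal sketch of that behavior, the Data tab of such a TI might contain nothing more than the line below (the variable and file names are illustrative; yours depend on how the view source maps the dimensions):

# Data tab (sketch): the cell where 0 was keyed comes through the view source,
# so a zero lands in the output file
AsciiOutput (GetProcessErrorFileDirectory | 'undef_check.txt', vs_Version, vs_BusinessUnit, vs_Department, NumberToString (nValue));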

How to reset data in a cube with UNDEFVALS?

In a cube without UNDEFVALS, if there is a value at an intersection and you hit delete, the value reverts to zero.  Let us consider ways to accomplish the same in a cube with UNDEFVALS.
Manual Option
This is applicable only if you have a limited number of cells to work with.  As mentioned in this IBM KB, you can input a value of 4.94066e-324 at the intersections.  To illustrate this, look at the screenshot below.  The LHS of the picture shows where a value of zero was present.  On the RHS, after inputting the aforementioned value and hitting recalc, the zeroes go away.



Turbo Integrator (TI) Option
  • CubeClearData – this will reset all the non-rule-driven intersections to the undefined/null value
  • Sometimes, however, you may want to reset only part of the cube and not the entire cube.  Our instinct is to use ViewZeroOut.  However, ViewZeroOut on an UNDEFVALS cube will only ZERO out the intersections; it will NOT reset them to the undefined/null value
  • In such situations, you can CellPutN a value of 4.94066e-324 into those intersections to reset the data to the undefined/null value
  • Alternately, you can use the new 10.2 function UndefinedCellValue for this case (valid in both rules and TI).  This function returns either 0 (on a cube without an UNDEFVALS declaration) or the special undefined value (on a cube with an UNDEFVALS declaration) – see the snippet below
# vs_CubeName holds the name of a cube carrying an UNDEFVALS declaration
vx_UndefValue = UndefinedCellValue (vs_CubeName);
CellPutN (vx_UndefValue, vs_CubeName, 'FY 2003 Budget', 'UK', 'Finance');

The 10.2 Reference Guide incorrectly states that the cube name is an optional parameter; APAR PI50000 has been raised to correct it.  If the cube name is not supplied, the function returns 0.

How to check for an undefined/null value in cube?

First of all, to know which cubes have an UNDEFVALS declaration, you can loop through the }Cubes dimension and pass each cube to the UndefinedCellValue function; an AsciiOutput of this call will list such cubes.  This is possible only in 10.2.
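A minimal TI sketch of that loop (the output file name is mine):

vi_Idx = 1;
WHILE (vi_Idx <= DimSiz ('}Cubes'));
    vs_Cube = DimNm ('}Cubes', vi_Idx);
    # A non-zero default value means the cube carries an UNDEFVALS declaration
    IF (UndefinedCellValue (vs_Cube) <> 0);
        AsciiOutput (GetProcessErrorFileDirectory | 'undefvals_cubes.txt', vs_Cube);
    ENDIF;
    vi_Idx = vi_Idx + 1;
END;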

Well, let’s suppose you do have a cube with UNDEFVALS.  When you work with such cubes, you’d like to know whether or not a given intersection holds an undefined/null value.  To do that, you can utilize the ISUNDEFINEDCELLVALUE function (valid in both rules and TI).

The IsUndefinedCellValue function compares the passed value to the cube’s default value and returns 1 if they match; otherwise it returns 0.  Now let’s go ahead and add the rule lines below:
Undefvals;
['Germany'] = N:
    IF ( IsUndefinedCellValue ( ['UK'] ) = 1
        , UndefinedCellValue
        , 88
        );
I am trying to assign a value of 88 to all cells of Germany where the corresponding cells of UK have a valid value (including 0).  With some sample values assigned, a view of the Cognos TM1 cube with UNDEFVALS will look like the below (please note Marketing and Engineering are NOT children of Sales; they are children of Total Org):


Let’s now add SKIPCHECK and FEEDERS to the rules file to see their impact with zero suppression on the cube view:
SKIPCHECK;
Undefvals;

['Germany'] = N:
    IF ( IsUndefinedCellValue ( ['UK'] ) = 1
        , UndefinedCellValue
        , 88
        );
FEEDERS;
['UK'] => ['Germany'];


Notice that a value of zero at (UK, IT) is able to FEED the (Germany, IT) intersection.  This is one of the impacts of using UNDEFVALS in the cube.

If you want to find out the actual value of the special undefined value that the IBM Cognos TM1 server assigns to your cube, you can write a small TI snippet to find that out.  See the screenshot below:
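A minimal sketch of such a snippet (the cube and output file names are mine; how such a tiny number is rendered as a string depends on NumberToString):

vs_CubeName = 'YourUndefvalsCube';
vx_Undef = UndefinedCellValue (vs_CubeName);
AsciiOutput (GetProcessErrorFileDirectory | 'undef_value.txt', NumberToString (vx_Undef));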



Catalogue your dimension creation processes in Cognos TM1

Are you at the helm of maintaining a very large TM1 model?  Or did you inherit a TM1 model and want insight into the system?  Say you have a new person on the team and he/she is not sure which process creates which dimension!

Well, the post below will help make your job a little easier, at least when it comes to dimension creation/update.  We are going to leverage the }DimensionAttributes control cube for a neat trick to map dimensions to their create/update processes.  This control cube stores attributes for all the dimensions that appear in your TM1 server.

You see, a large TM1 model includes several dimensions – most of which are created through TI processes.  This is true in many implementations, since the metadata for your model is sourced from external systems that feed into TM1.  At the end of the day, your Architect will look like this –



Right click on “Dimensions” -> Choose “Edit Attributes”


From the “Edit” menu, choose “Add New Attribute”


Type “CreatedBy” as the attribute name, of type “Text”, and click OK.  Your screen should now look like this:



You will see the newly created attribute along with the others that currently exist on your system.  Click OK and go back to Architect.


When we created the new attribute “CreatedBy” for the dimensions, IBM Cognos TM1 behind the scenes updated the control cube }DimensionAttributes.  You can choose to display control objects in Architect and see what the cube looks like.  All that is now left is to modify your dimension creation process and add a small piece of code in the Epilog.  To illustrate this, I will use a text file as the source and create a dimension “A_Sample_Dim”.  The text file has the following data in it:
ElemName, ParentName
Direct, Sales
Indirect, Sales
PSO, Sales
Sales, TOTAL ORG
Marketing, TOTAL ORG
Engineering, TOTAL ORG
GA, TOTAL ORG
Finance, GA
IT, GA
Administration, GA
Create a new process using this file as the source for your Turbo Integrator process and choose the appropriate fields as shown below:



Go to the Variables tab and in “Contents” choose “Other” for both variables.  Below is the code in the various tabs.

PROLOG
vs_DimName = 'A_Sample_Dim';
# If the dimension already exists, destroy it so that it can be rebuilt from scratch
IF (DimensionExists (vs_DimName) = 1);
    DimensionDestroy (vs_DimName);
ENDIF;
DimensionCreate (vs_DimName);
METADATA
# Insert the leaf element, insert its parent as a consolidation, then link the two
DimensionElementInsert (vs_DimName, '', ElemName, 'N');
DimensionElementInsert (vs_DimName, '', ParentName, 'C');
DimensionElementComponentAdd (vs_DimName, ParentName, ElemName, 1);
This is the normal dimension creation process that most developers follow.  As mentioned before, we need to add a small piece of code in the Epilog with the following contents:

vs_CubeName = '}DimensionAttributes';
vs_TIName = GetProcessName;
CellPutS (vs_TIName, vs_CubeName, vs_DimName, 'CreatedBy');

Save and run your code.  You should now see the value in Architect.  This approach has to be adopted for all your dimension creation/update processes.


Closing Thoughts:

  1. Populating the }DimensionAttributes cube to record the processes that build your TM1 model dimensions will go a long way for your ongoing support
  2. This approach need not be limited to the creating process alone; you could extend it to whatever attributes you can think of on a given dimension, for example: owner, update frequency, source etc.  All you’d need are additional attributes to capture them – see the sketch below
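For instance, an extended Epilog might look like the sketch below.  The UpdateFrequency and Source attributes (and their values) are hypothetical and would need to be created first, via the same Edit Attributes dialog used for CreatedBy:

vs_CubeName = '}DimensionAttributes';
CellPutS (GetProcessName, vs_CubeName, vs_DimName, 'CreatedBy');
# Hypothetical extra attributes – add them via Edit Attributes beforehand
CellPutS ('Daily', vs_CubeName, vs_DimName, 'UpdateFrequency');
CellPutS ('HR extract', vs_CubeName, vs_DimName, 'Source');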

Wednesday, September 16, 2015

Manage User Sessions on your IBM Cognos TM1 Server

As a TM1 administrator, at times you are required to manage user sessions on the system.  Some activities include finding active users or tracking user activity on the system, and at certain times disconnecting users as part of ongoing maintenance/system upkeep.

Track User activity

There is a mechanism available to track the currently active users on a TM1 instance.  However, if you want to see everyone who has logged on through the course of the day, or the last few days, you can enable the LOGIN logger.  With 10.2.2 FP1 there are new loggers available.  One of them is TM1.Login, which will write the login and logout of all users into the tm1server.log file.
If you already have a tm1s-log.properties file, you’d need to add the line below.  If you don’t have this file, you can locate one in the directory of each sample TM1 database.  The tm1s-log.properties file should sit in the same directory as tm1s.cfg:
log4j.logger.TM1.Login=DEBUG
Once done, you will see entries like the below in the tm1server.log file, which will continually record the login/logout sessions of the users.  One fantastic thing is that if you have a BI report that uses TM1 as a data source, this logger will help you track the report executions as well.

 

Find Active Users

This is a fairly straightforward task, which involves setting a parameter in the tm1s.cfg file.  The parameter name is ClientPropertiesSyncInterval and it is dynamic.  This property controls the frequency (in seconds) at which clients’ properties in the control cube }ClientProperties are updated.  The example below will update every 5 minutes (300 seconds):
ClientPropertiesSyncInterval=300


Once you have this setting in place, you can open the control cube }ClientProperties and see the active users.  If this cube is not visible, you need to turn on “Display Control Objects” from the View menu.  Select STATUS as the element of interest, turn on zero suppression, and it will list the active users on your instance.  It will not show users who were active earlier and have since logged out.

Disconnect Users

There are situations that sometimes lead you to disconnect users from the system:
  • During system maintenance, routine downtime etc
  • A developer has kicked off a TI and it cannot be cancelled through TM1 Top or the Operations Console; in that case you can disconnect the user
To do this:
  1. Right click on your server -> Choose ‘Server Manager’
  2. Choose the option ‘Disconnect Clients’ and enter the number of minutes
  3. Click on ‘Select Clients’.  It will open up the }Clients dimension.  Turn on the alias }TM1_DefaultDisplayValue (the CAMIDs listed will not tell you who the user is).  Click OK
  4. Click OK again … this will disconnect the selected users from the TM1 instance.  You can verify this from your TM1 Top session or the Operations Console
Reference
TM1 Taking a Peek Under the Covers, and Learning how to Troubleshoot (Session #1169 @ IBM Vision 2015)

 

Tuesday, September 8, 2015

Cognos TM1 Chores: Single Commit vs Multiple Commit


A chore in IBM Cognos TM1 server is a container for a group of one or more Turbo Integrator (TI) processes.  It defines the sequence in which the processes are executed.  The processes inside a chore are executed sequentially – in other words, the 2nd process in a chore is executed after the 1st one finishes; the 3rd process is executed after the completion of the 2nd one … and so on.
Chores can also be used to run the same process multiple times – in such scenarios, the parameters passed to the process will be different.
  • Chores once activated (enabled) run periodically on the set schedule.  When the chore is run from a schedule, it runs on its own thread.
  • Alternately, you could right click on the chore and run it on demand
Irrespective of how the chore is run, all the TI processes are wrapped in one transaction.  This means that any locks acquired by the 1st process are held until the last process completes.  Therefore, any data updates made by the group of TIs in the chore are committed only when the last process completes.  This is the default behavior of the Cognos TM1 server.

Starting from 10.1, there is a provision to commit each TI in a chore as and when it is processed.  In this mode, the locks are not held until the last process completes.  Instead, the data modified by a TI in the chore is committed, and its locks are released, as soon as its execution within the chore is complete.

Let’s look at an example, which demonstrates the difference between the 2 options – Single Commit versus Multiple Commit.  To illustrate this, I will use a simple cube, 3 TI processes and 2 chores.




1st TI Process – all it does is increment the value by 10 in the 1st intersection of the cube (vs_CubeName is assumed to be set earlier in the process):

vi_Value = CellGetN (vs_CubeName, 'A', 'E1', 'zDim_03');
AsciiOutput (GetProcessErrorFileDirectory | 'sk01.txt', 'Before: ' | NumberToString (vi_Value) );
CellIncrementN (10, vs_CubeName, 'A', 'E1', 'zDim_03');
vi_Value = CellGetN (vs_CubeName, 'A', 'E1', 'zDim_03');
AsciiOutput (GetProcessErrorFileDirectory | 'sk01.txt', 'After: ' | NumberToString (vi_Value) );

2nd TI Process – this waits for some time before incrementing the value by 10 at the same intersection (writing its log to sk02.txt):
vi_SimplyWaitCounter = 10000000 * 2;
# Busy-wait loop to simulate a long-running process
WHILE (vi_SimplyWaitCounter > 0);
    vi_SimplyWaitCounter = vi_SimplyWaitCounter - 1;
END;
val = CellGetN (vs_CubeName, 'A', 'E1', 'zDim_03');
AsciiOutput (GetProcessErrorFileDirectory | 'sk02.txt', 'Before: ' | NumberToString (val) );
CellIncrementN (10, vs_CubeName, 'A', 'E1', 'zDim_03');
val = CellGetN (vs_CubeName, 'A', 'E1', 'zDim_03');
AsciiOutput (GetProcessErrorFileDirectory | 'sk02.txt', 'After: ' | NumberToString (val) );

3rd TI Process – almost the same as the 1st process, except that it writes to sk03.txt instead of sk01.txt and increments the intersection value by 5 instead of 10; a sketch is below.
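Assuming the same vs_CubeName as the other two processes:

vi_Value = CellGetN (vs_CubeName, 'A', 'E1', 'zDim_03');
AsciiOutput (GetProcessErrorFileDirectory | 'sk03.txt', 'Before: ' | NumberToString (vi_Value) );
CellIncrementN (5, vs_CubeName, 'A', 'E1', 'zDim_03');
vi_Value = CellGetN (vs_CubeName, 'A', 'E1', 'zDim_03');
AsciiOutput (GetProcessErrorFileDirectory | 'sk03.txt', 'After: ' | NumberToString (vi_Value) );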

1st chore: comprises Process 1 and Process 2
2nd chore: comprises Process 3 alone

Scenario 1 – Single Commit

Run Chore 1 on demand (from one session).  Run Chore 2 on demand (from another session).  If you open the cube, the value will be 20.  Here’s a chart showing what transpired.




  • The 1st TI process in Chore 1 (named Process 1) executes and puts a value of 10 in the cube.  This happens in a fraction of a second.  By this time the 2nd TI in Chore 1 (named Process 2) is already running and doing its wait.
  • Now the 1st TI from Chore 2 (named Process 3) is executed.
  • When it reads the value from the cube, it reads it as 0.  Although Process 1 from Chore 1 has completed, we have used Single Commit, so its value is not yet committed and therefore not visible outside of the chore (i.e. in Process 3 of Chore 2).
Therefore the final value in the cube is 20 (10 + 10), as opposed to 25 (10 + 5 + 10)!  You can see the cube value and the log entries below:



Scenario 2 – Multiple Commit

Clear the value in the cube.  Then run Chore 1 on demand (from one session) and Chore 2 on demand (from another session).  If you open the cube, the value will be 25.  Here’s a chart showing what transpired.

  • The 1st TI process in Chore 1 (named Process 1) executes and puts a value of 10 in the cube.  This happens in a fraction of a second.  After this, the data is committed and the locks are released.  By this time the 2nd TI in Chore 1 (named Process 2) is already running and doing its wait.
  • Now the 1st TI from Chore 2 (named Process 3) is executed.
  • When it reads the value from the cube, it reads it as 10 (instead of 0).  This is because of multiple commit.
  • A few seconds later, when Process 2 in Chore 1 reads the value, it sees 15 (10 + 5).  It then adds 10 more, bringing the result to 25.
 


Friday, August 21, 2015

Clean up Log directory in IBM Cognos TM1

As part of development, over a period of time we often see plenty of files created in the logs directory.  Typically these files consist of:
  1. Errors encountered in execution of Turbo Integrator (TI) code
  2. Saved files of tm1s.log (after SaveDataAll has been run or service is recycled)
  3. AsciiOutput / TextOutput files produced as part of TI code for debugging
  4. Files generated as required for upstream/downstream processing
Some sort of method is required to clean these files periodically; otherwise a lot of disk space is wasted storing them on the server.


To configure the logs directory, you need to set the LoggingDirectory parameter in the tm1s.cfg file.  While this parameter is optional, it is recommended to set it to a folder of your choice.  Typically this folder is a sibling to the data directory, and my recommendation is that the path to this logs folder in tm1s.cfg be relative rather than fully qualified!
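A minimal tm1s.cfg entry along those lines might be (assuming the logs folder sits beside the data directory and the relative path resolves as expected on your setup):

LoggingDirectory=..\logs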
If the parameter is not specified then:
  • By default, it will write to the data directory, thereby cluttering that directory
  • If there is more than one DataBaseDirectory mentioned in tm1s.cfg, the logs are written to the 1st one
So let’s see how a sample log directory looks after a few weeks/months.  In our case there has not been much development activity, so the minor errors are far fewer in number :)


We built a TI process, scheduled to run every morning/week before we walk in, that cleans up the log files.  Let’s look at the code.  There are 2 parameters for the process:


ps_SearchStr – specify the beginning part of the file names that need to be deleted.  Do NOT use wildcards here; the code appends the ‘*.log’ wildcard itself.

pi_Days – specify a number of days.  Any file older than this and matching the above search pattern will be deleted; newer files will be retained.  If you want to delete all matching files, enter pi_Days = -1 when running the process.
The code assumes that the date of the file is embedded in the file name.  It is mainly written to cater to the deletion of time-stamped tm1s logs as well as TM1ProcessError files; both of these have the date as part of the file name.

Prolog

In this tab, we delete those files that match the pattern of the 1st parameter (ps_SearchStr) and whose age in days exceeds the 2nd parameter (pi_Days).
vi_StartIdx  = Long (ps_SearchStr) + 1;
vs_FileName  = '';
vs_FilePath  = GetProcessErrorFileDirectory;
# Search for the presence of a matching file
vs_FileName  = WildcardFileSearch (vs_FilePath | ps_SearchStr | '*.log', '');
WHILE (vs_FileName @<> '');
    # Extract the file's date (embedded right after the search string)
    # and subtract it from today's date.  If older than pi_Days, delete the file
    vLogFileDate = DAYNO ( SUBST (vs_FileName, vi_StartIdx, 4) | '-' | SUBST (vs_FileName, vi_StartIdx + 4, 2) | '-' | SUBST (vs_FileName, vi_StartIdx + 4 + 2, 2) );
    vTodayDate   = DAYNO (TODAY (1));
    vDifference  = vTodayDate - vLogFileDate;
    IF (vDifference > pi_Days);
        AsciiDelete (vs_FilePath | vs_FileName);
        # Restart the search, since the file list changed after the delete
        vs_FileName = '';
    ENDIF;
    # Search for the next file
    vs_FileName = WildcardFileSearch (vs_FilePath | ps_SearchStr | '*.log', vs_FileName);
END;
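For example, running the process with ps_SearchStr = 'tm1s' and pi_Days = 30 would delete the time-stamped tm1s logs older than 30 days and keep the newer ones.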

Epilog

The code here mainly deletes the other types of files that get created: for example, the .txt files created by the AsciiOutput / TextOutput calls that developers put in for debugging, and perhaps a few .csv files, .log files etc.  You’d need to alter the code to suit your needs.
# Delete other .log files here, which users/developers may have created and which are not required
vs_LogFile1  = 'TM1ProcessError_';
vs_LogFile2  = 'tm1s';
vs_FileName  = '';
vs_FilePath  = GetProcessErrorFileDirectory;
vs_FileName  = WildcardFileSearch (vs_FilePath | '*.log', '');
WHILE (vs_FileName @<> '');
    # Ensure it is not one of the TI error files or a time-stamped tm1s log file
    IF (Scan (vs_LogFile1, vs_FileName) = 0 & Scan (vs_LogFile2, vs_FileName) = 0);
        AsciiDelete (vs_FilePath | vs_FileName);
        vs_FileName  = '';
    ENDIF;
    vs_FileName = WildcardFileSearch (vs_FilePath | '*.log', vs_FileName);
END;
# Delete miscellaneous files like .txt, .csv, .cma from the log directory
# Below is an example for .txt alone
vs_FileName  = WildcardFileSearch (vs_FilePath | '*.txt', '');
WHILE (vs_FileName @<> '');
    AsciiDelete (vs_FilePath | vs_FileName);
    vs_FileName = WildcardFileSearch (vs_FilePath | '*.txt', '');
END;

 


Accomplish Attribute Updates in Cognos TM1 Smartly!

Every application involves some kind of metadata build.  Quite often we build the dimension structures and then assign attributes to the elements of the dimension.  When we add an attribute to a dimension, the TM1 server creates a control cube for it – }ElementAttributes_YOUR_DIMNAME.  It is a 2-dimensional cube: one dimension is the dimension in question and the other is the control dimension }ElementAttributes_YOUR_DIMNAME.

Depending on the attribute type (numeric or string), we use AttrPutN or AttrPutS to populate the attributes.  I will demonstrate the effect of using AttrPutN / AttrPutS.  Below is a screenshot of the logs directory before we begin any work.


Let us build a very large dimension and populate attributes for it.  You can refer to my earlier post on building a large dimension.  The entire code of that process was in the Prolog.

Option 1: Use AttrPutS / AttrPutN

Let’s tweak the process and add the code below to the Epilog:
vs_AttrName1 = 'Description';
vs_AttrName2 = 'Caption';
vs_AttrName3 = 'Index';
AttrDelete (vs_DimName, vs_AttrName1);
AttrInsert (vs_DimName, '', vs_AttrName1, 'S');
AttrDelete (vs_DimName, vs_AttrName2);
AttrInsert (vs_DimName, '', vs_AttrName2, 'S');
AttrDelete (vs_DimName, vs_AttrName3);
AttrInsert (vs_DimName, '', vs_AttrName3, 'N');
vi_DimSiz = DimSiz (vs_DimName);
WHILE (vi_DimSiz > 0);
    vs_Elem = DimNm (vs_DimName, vi_DimSiz);
    AttrPutS ('Desc_' | vs_Elem, vs_DimName, vs_Elem, vs_AttrName1);
    AttrPutS ('Capt_' | vs_Elem, vs_DimName, vs_Elem, vs_AttrName2);
    AttrPutN (vi_DimSiz, vs_DimName, vs_Elem, vs_AttrName3);
    vi_DimSiz = vi_DimSiz - 1;
END;
I am adding 3 attributes (2 string and 1 numeric) for testing.  Let’s run the code with the following parameter values:


It took 40.16 seconds (as mentioned in the log file) to complete the process.  However, the tm1s.log size has now increased.

  

Let’s delete the dimension and run SaveDataAll before we begin the next step.  Open the tm1s_TIMESTAMP_.log file.  You will see tons of entries wherein the system records the old attribute values for the 3 attributes we created.

Option 2: Use AttrPutS / AttrPutN and disable cube logging

We will now modify the above code and disable logging on the attribute cube.  Then, when an AttrPut call is made, the entry is not logged into the tm1s.log file.
vs_AttrName1 = 'Description';
vs_AttrName2 = 'Caption';
vs_AttrName3 = 'Index';
AttrDelete (vs_DimName, vs_AttrName1);
AttrInsert (vs_DimName, '', vs_AttrName1, 'S');
AttrDelete (vs_DimName, vs_AttrName2);
AttrInsert (vs_DimName, '', vs_AttrName2, 'S');
AttrDelete (vs_DimName, vs_AttrName3);
AttrInsert (vs_DimName, '', vs_AttrName3, 'N');
# Disable cube logging on the attribute cube
vs_AttrCubeName = '}ElementAttributes_' | vs_DimName;
vi_OldValue = CubeGetLogChanges (vs_AttrCubeName);
CubeSetLogChanges (vs_AttrCubeName, 0);
vi_DimSiz = DimSiz (vs_DimName);
WHILE (vi_DimSiz > 0);
    vs_Elem = DimNm (vs_DimName, vi_DimSiz);
    AttrPutS ('Desc_' | vs_Elem, vs_DimName, vs_Elem, vs_AttrName1);
    AttrPutS ('Capt_' | vs_Elem, vs_DimName, vs_Elem, vs_AttrName2);
    AttrPutN (vi_DimSiz, vs_DimName, vs_Elem, vs_AttrName3);
    vi_DimSiz = vi_DimSiz - 1;
END;
# Revert the logging setting on the attribute cube
CubeSetLogChanges (vs_AttrCubeName, vi_OldValue);
The process took 34.78 seconds to finish (saving about 5 seconds) and the tm1s.log file is no longer bloated.

Summary:

  • AttrPutS / AttrPutN are used to populate the attribute value(s) of elements in a dimension.  Each call of AttrPutN / AttrPutS creates an entry in the tm1s.log file.  Therefore, for a very large dimension with a lot of attribute updates, you will likely end up with a longer execution time
  • In addition, the tm1s.log file can bloat up to almost 1 GB, depending on the volume of change
  • It is therefore advisable to disable logging on the attribute cube before performing attribute updates on a dimension
  • The time difference and tm1s.log bloat may not be an issue if you are working with small dimensions
AttrPutS ('Happy Smart Attribute updating to you!', 'Blog', 'Readers', 'Message');

Friday, August 7, 2015

Clearing the air around IBM Cognos TM1’s VMM setting

In case you are rolling your eyes and wondering why we need yet another blog post talking about VMM, you may want to read on for some new information.  Let’s first start with what VMM is and how it is useful.
  • VMM stands for View Maximum Memory
  • It is a property that can be set on each cube
  • It controls the amount of RAM reserved on the server for storing view caches (a.k.a. Stargate views)
  • If the VMM threshold is reached for a given cube, the oldest views are purged to make room for newer ones (kind of FIFO – First In, First Out)
VMM and its partner in crime, VMT (View Minimum Time), together control Stargate caching.  You can check out the Operations Guide to learn more about VMT.  I felt compelled to write about VMM, given the confusion in its documentation.
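As an aside, VMM is held (as a string) in the }CubeProperties control cube, so it can be inspected or set from TI.  A minimal sketch, with an illustrative cube name:

# Read the current VMM setting (an empty string means the default applies)
vs_VMM = CellGetS ('}CubeProperties', 'plan_BudgetPlan', 'VMM');
# Reserve 1024 KB (1 MB) of Stargate cache for the cube
CellPutS ('1024', '}CubeProperties', 'plan_BudgetPlan', 'VMM');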

What’s the default value for VMM?

That is the intent of this post.  The default value for VMM is 128 KB, and it is specified in KB.  For those of you raising your eyebrows, and those who could swear that it is 64 KB, take a look at the chronology of events that is causing this confusion:
  • The 9.5 & 10.1 documentation (the Operations Guide in particular) says the default value is 65,536 and that it is specified in bytes!
  • Come April 2012, an IBM KB (Ref # 7024354) is released that mentions the correct value


 (Fig 1: KB Article showing the correct values)
  • The 10.2 and 10.2.2 documentation (the Operations Guide in particular) still says the default value is 65,536 and that it is specified in bytes!
  • A new KB article (Ref # 1960495) on tips and techniques for working with MTQ in TM1, released in July 2015, states that the default value is 65KB (not 64KB)!

(Fig 2: An incorrect KB article)


So what is really going on here?  What is the default value of VMM and how is it expressed?  The correct answer is:
  • The default value of VMM is 128
  • It is specified in kilobytes.  If you enter a VMM of 1024 for a cube, the system will reserve 1024 KB = 1 MB of RAM for storing the view caches (Stargate views)
This information is correctly stated in KB Ref # 7024354.  Unfortunately, it has trickled down neither into the Operations Guide nor into subsequent KB articles that refer to VMM.  Consequently, IBM KB articles and blogs on VMM continue to state that the default value is 64KB or 65KB and that it is expressed in bytes – which is misleading.

What is the Max Value of VMM?

OK, if the default value is indeed 128 KB, then what is the max value?  Per the 10.2.2 Operations Guide, the maximum value that can be assigned to VMM is ~40 GB.

 (Fig 3: Operations Guide showing max value for VMM)

Again, this is not entirely correct.  The valid range for VMM is 0 to 2,147,483,647 KB (which equates to 2 TB).  This is a theoretical limit; the practical upper limit is constrained by the amount of RAM available on your server, with the assumption that there is sufficient memory left to load the other cubes in memory.  The IBM KB article (Ref # 7024354) mentions this correctly – refer to the 2nd line in Fig 1 above.

An APAR (PI45248) has been raised to correct the default value as well as the upper limit of VMM.  Hopefully this gets corrected in a future release of the documentation.  You can check the status of the APAR here.