Thursday, June 25, 2015

IBM Cognos TM1 Import source in SPSS reads incorrect data from TM1 Cube - How to fix it?

Last week, I published a post on reading IBM Cognos TM1 data into SPSS Modeler using the newly available source palette within SPSS Modeler called "IBM Cognos TM1 Import".  One of my colleagues posed a question: she was getting incorrect results when using a Cognos TM1 cube as a source in SPSS.  The last digit of the number gets dropped off in SPSS Modeler!




If you are on SPSS Modeler ver 17, there is no need to fret.  This issue is seen only in SPSS Modeler ver 16.  There is a well-documented KB article on the IBM site - "IBM Cognos TM1 Import source node is reading continuous data incorrectly" (Reference #: 1683459).  The article tells us to modify the ExportToSPSS process to fix the issue:
Value = if(VALUE_IS_STRING = 1, SVALUE, NumberToStringEx(NVALUE,'#####0.0##', '.', ','));
This post describes how to make this change in the process.  There are 3 ways to implement this fix:
  1. Use IBM Cognos TM1 Architect
  2. Use IBM Cognos TM1 Performance Modeler
  3. Use a Text Editor

Text Editor

You really need to know what you are doing here.  There are numbers at the beginning of every line, which the TM1 system uses to parse out the code in a TI process.  A wrong modification will result in unexpected behavior of the program, including errors.  I would not recommend this approach!

TM1 Performance Modeler

By far the easiest way to fix the code, especially in the current scenario.  I will explain more about using Architect, and the problems faced, later.  Open TM1 Performance Modeler, locate the TI process, double-click it, edit and save.  That's it.  You are ready to use the modified process in SPSS Modeler.
While the change is easy and swift to make, adding one extra line of code in TM1 Performance Modeler increased the file size from 55KB to 126KB!


TM1 Architect

If you open TM1 Architect, log on to the server, locate the TI process and double-click it, you get 2 error messages right away.  See the screens below:

The reason these messages pop up is that this TI process uses an IBM Cognos TM1 cube view as its data source, and there is no cube view defined for it:

If you click on the Variables tab, you will notice there are 128 variables present.  So we will need to create a cube with 127 dimensions (yes, that's right, 127 dimensions, not 128).  The 128th variable will be the value (SValue or NValue) of the measure.


Using Architect to modify the process is not straightforward and involves a little bit of a workaround.  I will explain that in detail here.  Below are the steps that need to be taken:
  • Create a cube with 127 dimensions
To help you create a cube with 127 dimensions, I have created a TI process.  At the time of writing I am not sure how to attach a file (other than media) to the blog, therefore I am pasting the code below.  In this process, a couple of parameters (pCubeName and pDimName) are defined and the code exists only in the Prolog tab.  A sketch of how the process can be invoked with its parameters follows the listing.

# pCubeName - name of the cube to create; pDimName - prefix for the 127 dimension names
vs_CubeName = pCubeName;
vi_MaxDims = 127;

# Stop if a cube with this name already exists
IF (CubeExists (vs_CubeName) > 0);
    ItemReject ('Cube already exists. Quitting program with error');
    ProcessError;
ENDIF;

# Create 127 dimensions (<pDimName>001 .. <pDimName>127), each with a single numeric element
WHILE (vi_MaxDims > 0);
    vs_DimName = pDimName | NumberToStringEx (vi_MaxDims, '000', '', '');
    IF (DimensionExists (vs_DimName) = 0);
        DimensionCreate (vs_DimName);
        DimensionElementInsert (vs_DimName, '', vs_DimName, 'N');
    ENDIF;
    vi_MaxDims = vi_MaxDims - 1;
END;
CubeCreate (vs_CubeName,
    pDimName | '001', pDimName | '002', pDimName | '003', pDimName | '004', pDimName | '005', pDimName | '006', pDimName | '007', pDimName | '008', pDimName | '009', pDimName | '010', pDimName | '011', pDimName | '012', pDimName | '013', pDimName | '014', pDimName | '015', pDimName | '016', pDimName | '017', pDimName | '018', pDimName | '019', pDimName | '020', pDimName | '021', pDimName | '022', pDimName | '023', pDimName | '024', pDimName | '025', pDimName | '026', pDimName | '027', pDimName | '028', pDimName | '029', pDimName | '030', pDimName | '031', pDimName | '032',
    pDimName | '033', pDimName | '034', pDimName | '035', pDimName | '036', pDimName | '037', pDimName | '038', pDimName | '039', pDimName | '040', pDimName | '041', pDimName | '042', pDimName | '043', pDimName | '044', pDimName | '045', pDimName | '046', pDimName | '047', pDimName | '048', pDimName | '049', pDimName | '050', pDimName | '051', pDimName | '052', pDimName | '053', pDimName | '054', pDimName | '055', pDimName | '056', pDimName | '057', pDimName | '058', pDimName | '059', pDimName | '060', pDimName | '061', pDimName | '062', pDimName | '063', pDimName | '064',
    pDimName | '065', pDimName | '066', pDimName | '067', pDimName | '068', pDimName | '069', pDimName | '070', pDimName | '071', pDimName | '072', pDimName | '073', pDimName | '074', pDimName | '075', pDimName | '076', pDimName | '077', pDimName | '078', pDimName | '079', pDimName | '080', pDimName | '081', pDimName | '082', pDimName | '083', pDimName | '084', pDimName | '085', pDimName | '086', pDimName | '087', pDimName | '088', pDimName | '089', pDimName | '090', pDimName | '091', pDimName | '092', pDimName | '093', pDimName | '094', pDimName | '095', pDimName | '096',
    pDimName | '097', pDimName | '098', pDimName | '099', pDimName | '100', pDimName | '101', pDimName | '102', pDimName | '103', pDimName | '104', pDimName | '105', pDimName | '106', pDimName | '107', pDimName | '108', pDimName | '109', pDimName | '110', pDimName | '111', pDimName | '112', pDimName | '113', pDimName | '114', pDimName | '115', pDimName | '116', pDimName | '117', pDimName | '118', pDimName | '119', pDimName | '120', pDimName | '121', pDimName | '122', pDimName | '123', pDimName | '124', pDimName | '125', pDimName | '126', pDimName | '127');
ViewCreate (vs_CubeName, 'All');
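If you would rather launch this cube-creation process from another TI process instead of answering the parameter prompt in Architect, a minimal sketch looks like the line below.  The process name and parameter values are hypothetical; substitute the name you saved the process under and your own cube name and dimension prefix.

# Hypothetical caller: creates the cube 'SPSS_Dummy_Cube' with dimensions SPSSDim001 .. SPSSDim127
ExecuteProcess ('SK_Create127DimCube', 'pCubeName', 'SPSS_Dummy_Cube', 'pDimName', 'SPSSDim');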
  • Save and run the TI process.  You will now see that the cube name you supplied as a parameter has been created and has 127 dimensions
  • Map the data source of the process to use a view on this 127-dimension cube
To do this, open the ExportToSPSS process in Architect.  Click OK on any errors you get.  In the Data Source tab, click on Browse, select the cube you created and choose "All" as the view

After this, click on the Variables tab; you will be prompted with a couple of options.  Choose the one highlighted

You can now go to the Data tab and make the modification recommended in the KB article.  Comment out the old line, add the new line and save the process.  Ensure that the preview in the IBM Cognos TM1 Import palette runs correctly.
# vValue = if(VALUE_IS_STRING = 1, SVALUE, NumberToString(NVALUE));
vValue = if(VALUE_IS_STRING = 1, SVALUE, NumberToStringEx(NVALUE,'#####0.0##', '.', ','));

Closing Thoughts

  • If you are on IBM SPSS Modeler ver 17, you will not face the issue of the last digit being dropped
  • Modifying the code through TM1 Performance Modeler is the fastest option, but it increases the file size by about 2.5 times.  You would need a file compare utility to figure out what TM1 Performance Modeler has added
  • Using TM1 Architect is clean; however, you will need to follow certain steps before you are ready to make the change
  • In ver 17, the file is named slightly differently and the code difference between ver 16 and 17 is huge.  See the screen below

  • As mentioned in my last blog, if you upgrade to a newer version of SPSS, you will need to copy these 3 files all over again to the DATA directories of the TM1 servers you are working with

Sunday, June 21, 2015

Read IBM Cognos TM1 Data in IBM SPSS Modeler

One of the new features added in SPSS Modeler 16 is support for reading data from IBM Cognos TM1 and exporting data to it.  In this post we will look at:
  1. Prerequisite for reading data from IBM Cognos TM1 into IBM SPSS Modeler
  2. Reading data from IBM Cognos TM1 cube into IBM SPSS Modeler
We can therefore bring Enterprise Planning data (or other data from IBM Cognos TM1) into the predictive analytics capabilities of SPSS.  This creates synergy between the two products.

Prerequisites

  1. SPSS Modeler must be ver 16 or above and IBM Cognos TM1 needs to be 10.2 or higher
  2. Before you begin working with IBM Cognos TM1 in SPSS Modeler (either for import or export), you will need to copy the following three processes (.pro files) from the IBM SPSS Modeler installation (<Modeler install directory>/ext/bin/pasw.tm1/scripts) to the DATA directory of your IBM Cognos TM1 server.  To do this, stop the Cognos TM1 service, copy these files and start the Cognos TM1 service again
    • ExportToSPSS.pro
    • ImportFromSPSS.pro
    • SPSSCreateNewMeasures.pro
  3. If you have multiple TM1 servers on a given TM1 Admin Host, you will need to copy these 3 .pro files to the DATA directory of every TM1 server you want to connect to from IBM SPSS Modeler
  4. If you did the TM1 integration before applying IBM SPSS Modeler 16.0 FP1, then after FP1 is applied you will need to copy the 3 .pro files all over again, i.e. you will need to repeat step #2.  Please read this IBM KB article
  5. A measure dimension needs to be defined on the IBM Cognos TM1 cube from which you are importing data.  To do this, right-click on the cube, choose Properties and pick the measure dimension
    • There must be at least one public view defined on the cube.  Private views cannot be read from IBM SPSS Modeler
  6. Per the IBM Knowledge Center, the data that needs to be read must be in UTF-8 format

Read data from IBM Cognos TM1

Please take a few minutes to read and follow the prerequisites.  Once you have done so, you are all set to get data from a powerful in-memory database into your predictive analytics tool.

In your stream, click on Sources and place the IBM Cognos TM1 Import palette on your canvas.  Double-click on it and set its properties.  First enter the PM hub URL and click on Connect

In the TM1 Server drop-down there will be a list of TM1 servers; pick the one where the cube resides.  For our test purposes, we will use Planning Sample.  Click on Login, enter the credentials and click OK.  Notice that the list of IBM Cognos TM1 cubes is shown in SPSS as well as in Architect.



A test cube which I created, "SK_Cube1", is missing from the SPSS list.  There is no measure dimension defined for this cube, and consequently it does NOT appear in the cube list within SPSS Modeler (as mentioned in prerequisite #5).  We will go ahead and define a measure dimension for this cube.  This can be done either by right-clicking on the cube or by entering the suitable text in the }CubeProperties control cube.  Once you are done, click on Refresh in SPSS Modeler
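If you prefer to do this from a TI process rather than through the cube's Properties dialog, a minimal sketch is shown below.  It assumes the measure dimension of SK_Cube1 is called SK_Cube1_Msr (a hypothetical name) and writes to the MEASURES_DIMENSION property in the }CubeProperties control cube.

# Register SK_Cube1_Msr as the measure dimension of SK_Cube1
CellPutS ('SK_Cube1_Msr', '}CubeProperties', 'SK_Cube1', 'MEASURES_DIMENSION');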



Take a look at the screen below.
  • In the blue box on the right side, you can see the views available on this cube as well as the sample data entered from TM1 Architect.  Observe that there are no public views available; "View 1" is a private view
  • In the red box on the left side, no views are listed under the cube in SPSS Modeler.  Since there are no public views available, you cannot work with this TM1 cube yet

We will open the private view and save it as a public view, then click Refresh in SPSS Modeler.  Select the view and click on Import.  Then click on Preview to see the sample data imported.  Voila!

       
When you hit the Preview button, it is at this stage that the ExportToSPSS process is executed.  If this process is not present on your TM1 server, you will get an error right away.

Wednesday, June 17, 2015

IBM Cognos TM1 and Stored Procedures


We often use ODBC as the Datasource Type in a Turbo Integrator (TI) process to load data into IBM Cognos TM1 cubes or build metadata.  In the majority of cases, we end up writing SQL code in the Query box to fetch the information that is required to perform the intended task.  Once in a while, we are required to work with a Stored Procedure (SP) in IBM Cognos TM1.

We will look at how to execute an SP from TI.  I will be using DB2's sample database to demonstrate.  The SP name is TM1SP1 and its body is as follows.  It is a very simple SP: it accepts a department number as a parameter and brings back the name and manager number associated with that department.

CREATE OR REPLACE PROCEDURE TM1.TM1SP1 (IN SrchNo char (3))
    DYNAMIC RESULT SETS 1
    LANGUAGE SQL
BEGIN
    DECLARE CUR1 CURSOR WITH RETURN TO CALLER FOR
        SELECT DeptNo, DeptName, MgrNo FROM TM1.DEPARTMENT WHERE DeptNo = SrchNo;
    OPEN CUR1;
END


Create a new TI process using ODBC as datasource type.  Choose the appropriate DSN, key in username and password.


For MS SQL, if single sign-on is enabled there is no need to key in the username and password.  In our case, DB2 is not configured for it, therefore a named user and password are required to connect to the database.


To run a stored procedure in DB2, you will need to use the CALL statement.  In our case, to run the TM1SP1 stored procedure, the command in the Query box will be of the form Call TM1.TM1SP1 ('A00'), with the department number of interest passed as the parameter.
In MS SQL or Oracle, you would use Execute SP_Name to run an SP.  Alternatively, you can abbreviate the Execute command to Exec and it will still work.

The preview pane of the TI window will now show the output of the stored procedure: the 3 columns from the Department table that the test SP was written to return.

You can now switch to the Variables tab in the TI window and work forward from there.
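As an illustration of "working forward", the Data tab could push the three returned columns straight into a cube.  The sketch below is hypothetical: the cube Departments, its measure elements Name and Manager, and the variable names (taken from the SP's result columns) are my assumptions, not part of the original process.

# Data tab sketch: load each row returned by TM1SP1 into a hypothetical Departments cube
CellPutS (DeptName, 'Departments', DeptNo, 'Name');
CellPutS (MgrNo, 'Departments', DeptNo, 'Manager');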

So far so good.  This method will work, as long as the stored procedure does NOT modify the data (i.e. there are no insert/update/delete statements in it).

Let's move on to the Employee table in our sample database and look at the salary of Ted

SELECT EmpNo, FirstNme, LastName, Salary FROM TM1.Employee where EmpNo = '000100'; 
We will write an SP that increments the salary of an employee by 1,000

CREATE OR REPLACE PROCEDURE TM1.TM1SP2 (IN SrchNo char (6))
    DYNAMIC RESULT SETS 1
    LANGUAGE SQL
BEGIN
    UPDATE TM1.Employee
    SET Salary = Salary + 1000
    WHERE EmpNo = SrchNo;
END

Every time a call to the SP is made, it increments the salary of the employee whose number is passed as a parameter by 1,000.  Going by the above steps, we create the SP and put the CALL statement in the TI query box, as below.
When you hit the Preview pane, you will see the results as below:

Ted's salary has already been incremented by 1,000, to 87,150.  Let's proceed to the Variables tab and define the 4 available variables as Other.  In the Data tab, let's put this statement

AsciiOutput (GetProcessErrorFileDirectory | 'sk.txt', EmpNo, FirstNme, LastName, NumbertoString (salary));


Save the process and run it.  After the process has run, open the text file - sk.txt in our case.  Ted's salary has been incremented by another 1,000, to 88,150!


While the intention was to run the SP once and increment the salary of the given employee by 1,000, the net effect is that the salary has been incremented by 2,000.  This happens because the data source query is executed when the process is previewed and again when it is run.  Another unintended consequence of this approach is that if someone merely opens this TI process in TM1 Architect, the salary is incremented again!

Therefore, if the SP we are working with modifies the data, it is better to run it in the Prolog by using the TI data source variable DatasourceQuery.

In our example, although we want to run the stored procedure in the Prolog of the Cognos TM1 TI process, the initial query pane still needs a valid SQL statement so that the variables can be defined.  Since the process works with 4 columns (EmpNo, FirstNme, LastName, Salary), we will write a SQL statement that selects those 4 columns but returns no rows.
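A dummy query along these lines would do; the exact statement below is my illustration, and any SQL that returns the same four columns and zero rows works equally well.

SELECT EmpNo, FirstNme, LastName, Salary FROM TM1.Employee WHERE 1 = 0;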


When you hit Preview, this is what is seen.
The Prolog tab will have the following code.  Because DatasourceQuery is overridden here, the CALL statement is issued only when the process is actually run, not when the TI is merely opened or previewed:
vs_SQL = 'Call TM1.TM1SP2 (''000100'')';
DatasourceQuery = vs_SQL;

Let the AsciiOutput statement remain as-is in the Data tab.  Set Ted's Salary back to 86,150.  

From TM1 Architect, run the modified TI process.  If we check the output file as well as the DB2 table contents, we will now see that the salary has been incremented by 1,000 only.  Even if someone repeatedly opens the TI, the salary is not changed - unless the process is run.

Summary:

  • To run a stored procedure from a TI process in IBM Cognos TM1, we can write the CALL (or Execute) statement in the query pane.  This is fine as long as the SP only fetches data
  • If, however, the SP we are working with modifies the data in some fashion (be it insert, update, delete or some combination of these), then writing the statement in the query pane will lead to double/triple/quadruple/multiple executions, causing a lot of grief.  In that case, build the statement in the Prolog and assign it to DatasourceQuery instead