spilling the beans

news aggregator

Custom drill paths for Analysis Services reporting

More information per pixel - Tue, 2013-12-03 10:11

Any business report will answer a predefined set of questions, but it will often give rise to many additional queries and chains of thought as the user wants to explore any oddities or anomalies in more depth.

As a report designer you want to provide flexibility in your reports. You may want to create customised drill-down behaviour which matches how the report users naturally think about the data. Similarly, giving users the ability to ‘drill across’ into other hierarchies is a great way of providing chain-of-thought interactivity, but the sheer complexity and number of hierarchies in many corporate cubes these days means a vanilla ‘drill across’ may be problematic. In the hands of users relatively unfamiliar with the data model, drill across can be a shortcut to a support ticket:

  • What is the difference between hierarchy ‘a’ & hierarchy ‘b’?
  • What is hierarchy ‘c’?
  • When I drill across into hierarchy ‘d’ it hangs?
    • that’ll be the 8 million skus in the attribute hierarchy you’ve picked…

We recently worked closely with a customer to implement a solution in XLCubed v7.6 for exactly these types of issues. Many thanks to Thomas Zeutschler at Henkel AG for the inspiration!

Flex Report extends a concept which Henkel had developed in-house to deliver report-level flexibility of the drill path, while retaining control over what the user can do. Non-technical report developers can easily define the drilling behaviour for a report (for example from Country -> Promotion -> Product Category), and can also provide controlled ‘Drill Across’ options where the users have a choice of 5 or 6 meaningful levels to drill into, rather than the 200 which may exist in the cube. The difference in usability can be stark.

The end result is report consumers with guided and controlled flexibility in data exploration. This type of reporting can be delivered by a potentially broad group of business users, who have the flexibility to develop sophisticated reporting applications without the need to go back to IT each time they want a slightly modified drill path. We are finding that both the deliverable reports and the report development process itself resonate with many businesses. Take a look at this video for an introduction: Flex Reporting

TM1 Specialist

TM1 Jobs Pipe - Tue, 2013-12-03 00:00
Title: TM1 Specialist Role: To work as part of a team to build, configure and deploy TM1 cubes, and to analyse, design and process-improve a new Cognos TM1 solution Employer: Tier 1 Bank Location: London Rates: £450-£600 per day (*depending on exp) A leading Tier 1 b ...

TM1 Consultant - 50-65k - London

TM1 Jobs Pipe - Mon, 2013-12-02 00:00
TM1 Consultant - 50-65k - London TM1 Consultant - This TM1 Consultant will have a minimum of five years' experience using IBM Cognos, gained internally or externally. The TM1 Consultant will preferably come from a financial background, although technical backgrounds are also considered. My client needs an expe ...

Cognos Developer - SC Cleared

TM1 Jobs Pipe - Mon, 2013-12-02 00:00
Cognos Developer - SC Cleared London - £535 per day 5 month contract Our client is a global consultancy and is looking for a Cognos Developer - SC Cleared to join the team on an initial 6 month contract with a view to extend. Skills: - Cognos Controller - Knowledge of TM1, BI, EPM If interested in this excellent op ...

Cognos Controller

TM1 Jobs Pipe - Mon, 2013-12-02 00:00
Cognos Controller (Cognos, TM1, BI, EPM) Cognos Controller Venn Group is currently seeking an experienced Cognos Controller to work with our client, a leading Consultancy, with their design team in an iterative Cognos development project for this global business and technology leader. ...

Cognos Consultant

TM1 Jobs Pipe - Sun, 2013-12-01 00:00
Cognos Consultant (CPM Consultant) My client, a nationwide organisation, is actively looking to recruit a Cognos Performance Management Consultant to join the existing team. Job Summary: To provide support and expertise to super users of Cognos Performance Management To cont ...

TM1 and CAM Failover

TM1 blogs - Wed, 2013-11-13 10:51
If you are using TM1 with CAM security and you have Cognos set up with multiple gateway dispatchers, then I would recommend checking out this post on the IBM website:

http://www-01.ibm.com/support/docview.wss?uid=swg21645974&myns=swgimgmt&mynp=OCSS9RXT&mync=E


You may set multiple ServerCAMURI entries as follows in tm1s.cfg:

 ServerCAMURI=http://server1:9300/p2pd/servlet/dispatch
 ServerCAMURI=http://server2:9300/p2pd/servlet/dispatch
 ServerCAMURI=http://server3:9300/p2pd/servlet/dispatch
 ServerCAMURIRetryAttempts=5

Remarks:

ServerCAMURIRetryAttempts specifies the retry count for accessing each dispatcher.
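
If you want to sanity-check that each dispatcher is actually reachable before relying on the failover order, a quick PowerShell sketch like the one below can help. This is only an illustration: the server names are the placeholders from the config above, and it assumes PowerShell 3.0 or later for Invoke-WebRequest.

 # Probe each configured dispatcher URI and report its HTTP status
 $uris = 'http://server1:9300/p2pd/servlet/dispatch',
         'http://server2:9300/p2pd/servlet/dispatch',
         'http://server3:9300/p2pd/servlet/dispatch'
 foreach ($uri in $uris) {
     try {
         $r = Invoke-WebRequest -Uri $uri -UseBasicParsing -TimeoutSec 5 -ErrorAction Stop
         "{0} responded with status {1}" -f $uri, $r.StatusCode
     } catch {
         "{0} failed: {1}" -f $uri, $_.Exception.Message
     }
 }

A connection error or timeout here points at a dispatcher that the ServerCAMURIRetryAttempts logic would end up retrying and skipping past.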

IBM SPSS Modeler v16.0 What’s new

TM1 blogs - Tue, 2013-11-12 05:58

Although it is not officially released yet, a document that explains what’s new has appeared online. You can find it by clicking here.

Some of the new key features that caught my attention are:

  • TM1 Source and Export nodes
    • With the source node you will be able to use cube views and import data into Modeler
    • With the export node you will be able to score data to an existing TM1 cube
  • New R nodes
    • In Modeler 15 FP2 an R node was introduced for modelling (R Model Build/Apply). The new release adds two more nodes, R Process and R Output. With the R Process node you can use R scripts to do transformations, and with the R Output node you can use R scripts to perform data analysis and produce text and graphical outputs. R nodes will also now be installed as part of the base Modeler installation rather than separately.
  • R in database
    • R nodes can be pushed back to certain databases (Oracle, Netezza and SAP HANA) for better performance.
  • Monte Carlo Simulation
    • Simulation Source node - generates synthetic data using a wide selection of statistical distributions
    • Fitting node – examines existing data and can create a preconfigured Simulation Source node
    • Simulation Evaluation node - terminal node designed to evaluate fields resulting from a simulated analysis stream
  • Python scripting – use Python to automate processes. The legacy scripting language will continue to be supported.
  • New Distinct node – handles duplicate records more effectively.
  • New Receiver Operating Characteristic (ROC) Evaluation node chart type
  • Area Under the Curve (AUC) and Gini metrics in Analysis node


Powershell with TM1Server.log

TM1 blogs - Tue, 2013-11-12 00:10
The tm1server.log is not the easiest thing to extract information from, but it contains golden nuggets of information.

Ideally you want to timestamp the tm1server.log after any restart, so you can then analyze a clean single session of information.
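
One way to do that is a small rename script run while the TM1 service is stopped (a sketch only; the log path below is a placeholder for wherever your logging directory sits):

# Archive the current log under a timestamped name before the server restarts
$log = 'D:\TM1Data\tm1server.log'
if (Test-Path $log) {
    Rename-Item $log ("tm1server-{0}.log" -f (Get-Date -Format 'yyyyMMdd-HHmmss'))
}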

One of the easiest things you can do is use Windows powershell to then extract just the messages you want.

e.g. if I just want to explore the cube dependency messages the server is producing, then I just run the following line within PowerShell:

Get-Content .\tm1server.log | Where-Object {$_ -match 'TM1.Cube.Dependency'} | Set-Content tm1server.log-out.txt

You can alter the above statement to do any filtering you need.
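
For instance, here is a sketch which combines a logger name with a date to pull out one day's process messages (the TM1.Process logger and the date are illustrative; substitute whatever you are hunting for):

# Keep only lines mentioning the TM1.Process logger on a given date
Get-Content .\tm1server.log |
    Where-Object { $_ -match 'TM1\.Process' -and $_ -match '2013-11-12' } |
    Set-Content tm1server-process.txt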

TM1 Cafe

TM1 blogs - Fri, 2013-11-08 09:40
Great overview of TM1 Cafe in 10.2 from QueBIT:

https://www.youtube.com/watch?v=4abO-5snYWI

IBM TM1 Watchdog

TM1 blogs - Thu, 2013-11-07 22:40
Some explanations around how to set up and use Watchdog in TM1 10.2:

http://www-01.ibm.com/support/docview.wss?uid=swg21654154&myns=swgimgmt&mynp=OCSS9RXT&mync=R

Top 20 Developments in IBM TM1 in recent years

TM1 blogs - Wed, 2013-11-06 12:20
Just realised that a presentation Andrew Stephens and I did for an IBM Cognos User Group is available for download from the IBM website.

In what will always be a controversial list, at the time we called out the rule function Continue as a real game changer for TM1 developers.

Those who have been doing TM1 for 10+ years, since the days of TM1 7.1.4 SR2, will remember when there was no Continue function and constant nested IFs made for ugly-looking rule files.

Check it out here:

http://public.dhe.ibm.com/software/au/downloads/Top_20_Developments_in_IBM_TM1_in_recent_years.pdf

How to use the Auto Classifier Node in SPSS Modeler 15

TM1 blogs - Tue, 2013-11-05 21:21

IBM SPSS Modeler V15.0 enables you to build predictive models to solve business issues, quickly and intuitively, without the need for programming.

In this demonstration we are going to show how you can use the “Auto Classifier” node.

The Auto Classifier node can be used for nominal or binary targets. It tests and compares various models in a single run. You can select which algorithms (Decision trees, Neural Networks, KNN, …) you want and even tweak some of the properties for each algorithm so you can run different variations of a single algorithm. It makes it really easy to evaluate all algorithms at once and saves the best models for scoring or further analysis. In the end you can choose which algorithm you want to use for scoring or use them all in an ensemble!

First, a brief description of the data. The data comes from the 1994 US Census database; you can find it at http://archive.ics.uci.edu/ml/datasets/Adult in the UCI Machine Learning Repository. The goal is to determine whether a person makes over 50K a year. The dataset has 14 variables, both categorical and numeric.

The first step is to import the data. The data are in CSV format, so we can use the “Var. File” node to import them. All you have to do is define the source path and we are ready to import the data.

Then we can use the “Data Audit” node to inspect the data. This is one of the most useful nodes in SPSS Modeler: it displays a graph and statistics for every variable and flags any missing values or outliers in the data. I am going to write more about this in another tutorial.

Click here to read the full article

