There is a lot of talk about "big" data these days, but in many cases, customers simply want to let their users perform analysis and reporting on the data they have captured in their data warehouse. We're not talking about data like video or email, or about Hadoop, or petabytes of data. There is a real need to access the terabytes of data captured in the data warehouse and to analyze ALL of it with a set of BI tools.
Dynamic Cubes was designed to address this problem. It allows customers to keep their data in the data warehouse and adds support for aggregate awareness and in-memory caching to accelerate the warehouse. Some PowerCube customers have migrated their data to Dynamic Cubes, resulting in quicker cube build/start times, access to larger volumes of data, and the ability to perform cross-functional area reporting and analysis. The support for relative time functionality on par with Transformer doesn't hurt, either!
Dynamic Cubes has been implemented successfully for applications with both large and small data volumes. It also supports features important to the enterprise - table-based security, type 2 slowly changing dimensions, named sets, relative time members, and near-real-time update of fact data.
And you can use it as a front end to Hadoop (BigInsights or Cloudera) if you want.
On Thursday, August 21 at 10:00 AM ET I will be giving a demonstration of Dynamic Cubes, as outlined below. Hope you can join me!
About IBM Cognos Dynamic Cubes
Dynamic Cubes gives individuals the ability to interactively analyze large volumes of data while still providing access to the underlying detail data, all within a single data container, eliminating the need for IT to develop complex BI deployments. Dynamic Cubes can also merge data from multiple cubes, enabling end-user analysis and reporting such as:
Across business areas.
Planning vs. actuals.
Historic vs. recent data.
Dynamic Cubes also promotes a single version of the truth by dynamically retrieving and caching data from the data warehouse.
As data volumes grow, traditional OLAP technologies struggle to keep pace. This is exhibited by:
Cube build times that preclude timely access to the latest data.
The necessity of summarizing data before a cube is built, e.g., data at the scope of product line instead of product SKU.
Reducing the scope of the data, e.g., reporting on 3 months' worth of data instead of the desired 12 months.
Poor query performance as data volumes increase.
IBM Cognos Dynamic Cubes addresses all of these challenges through its innovative use of in-memory caching and in-database and in-memory aggregate awareness.
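To see why aggregate awareness helps, consider the routing decision at its core: if a pre-summarized table retains every column a query groups by, the engine can answer from that much smaller table instead of scanning the detail fact table. The sketch below is a minimal illustration of that idea only; the table names, columns, and routing function are hypothetical and are not part of the Dynamic Cubes API.

```python
# Hypothetical sketch of aggregate-aware query routing (illustration only;
# not the Dynamic Cubes implementation). An aggregate table can answer a
# GROUP BY query when it retains every requested grouping column.

AGGREGATES = {
    # aggregate table name -> dimension columns it is grouped by
    "agg_sales_by_line_month": {"product_line", "month"},
    "agg_sales_by_region_year": {"region", "year"},
}

DETAIL_TABLE = "fact_sales"

def route_query(group_by):
    """Pick the most specific table that can answer a GROUP BY query."""
    requested = set(group_by)
    # An aggregate qualifies if it covers all requested columns.
    candidates = [name for name, cols in AGGREGATES.items()
                  if requested <= cols]
    if candidates:
        # Prefer the aggregate with the fewest grouping columns,
        # i.e. the smallest qualifying summary table.
        return min(candidates, key=lambda n: len(AGGREGATES[n]))
    # No summary covers the query: fall back to the detail fact table.
    return DETAIL_TABLE
```

A query grouped by product line can be served from the monthly summary, while a query at SKU grain falls through to the detail table - which is exactly the behavior that keeps response times flat as the fact table grows.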
Join the AnalyticsZone on August 21 at 10:00 AM ET to learn how you can accelerate your data with IBM Cognos Dynamic Cubes.
In this 30-minute presentation, you'll see:
The ability of Cognos Dynamic Cubes to provide responsive analysis on a data warehouse with a multi-billion-row fact table
How to model, manage and optimize Cognos Dynamic Cubes
How the addition of in-database and in-memory aggregates can make dramatic improvements to BI query performance
DATE: Thursday, August 21
TIME: 10:00-10:30 AM ET
SPEAKER: David Cushing, Product Manager, IBM Cognos Dynamic Cubes
Join the AnalyticsZone for this next installment of the Cognos BI Live! webcast series and learn more about IBM Cognos Dynamic Cubes.