Guest post by Mario Daigle, Senior Product Manager, Cognos Platform, IBM Business Analytics
Follow Mario on Twitter: @mariodaigle
There’s quite a buzz in the industry that in-database analytics is a thing of the past, and that in-memory, or caching, is all you need.
Spoiler alert. I disagree.
In-memory is getting a lot of hype these days, and that’s not surprising. RAM is cheap, so it’s become a very viable option to accelerate performance. And, the analytics market is adapting. That’s the natural evolution of things.
Also, it’s new. We like new. I want my iPhone 5!
But, as usual, we’re all over-rotating a little.
Organizations like SAP and QlikTech are doing a great job at messaging their respective in-memory technologies. In my experience as a product manager, the effect of this messaging has framed many of my analytics conversations.
To illustrate, I regularly get asked how we compare to HANA, which is a powerful indicator given that IBM Cognos business intelligence doesn’t compete with database technologies; it sits on top of data sources such as HANA. Other technologies within IBM are more appropriate comparisons, but as soon as we talk about our in-memory technology, the question inevitably comes up. That business intelligence and database technologies get compared because they both use caching is a testament to how in-memory has become a topic unto itself.
To be fair, the end goal is similar: make the end user experience fast. It’s that simple. In-memory or caching certainly helps, but does that make in-database analytics obsolete? Hardly.
Companies like Teradata and Tibco Spotfire have been pushing back at the in-memory hype, arguing, correctly in my opinion, that data technologies – whether in-memory or in-database – still need to move data across the wire (yet another bottleneck), and that with extreme data volumes, moving the entire data set in-memory is neither realistic nor cost-effective.
Not to mention the challenge of ensuring a proper fail-safe system of record. In-memory simply isn’t a complete answer in every situation, to say the least.
Acknowledging that we all have our biases, IBM has historically built very flexible architectures. I’m continuously awed by the sheer variety of ways our customers choose to deploy business intelligence systems. Selfishly, I love saying, “Yes, you can do that!”
Our bias, clearly, is to provide flexibility.
With the latest release (v10.2) of the IBM Cognos family, we’ve upheld that principle, while evolving our in-memory strategy in a big way with Dynamic Cubes.
Dynamic Cubes provides flexible in-memory acceleration, but joins forces with the data warehouse instead of trying to replace it; administrators decide whether their Dynamic Cubes leverage in-database or in-memory aggregates (or both!). I’ve written an FAQ on Dynamic Cubes that you can find here.
The truth is that every customer’s story is different.
Some have a choke point at the data tier, so they have to compensate with a beefy memory-rich application tier. For others, it’s the polar opposite.
Sometimes, it’s cost. If customers can outfit existing hardware or a virtual machine with enough memory to get a real performance boost, they’re ecstatic.
Some have told me in very simple terms that they don’t want to reproduce 20 terabytes of data in-memory, because it already lives in the warehouse. Following the Pareto principle, 20 percent of the data is usually accessed by 80 percent of users. Guess where they want to spend their money?
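That 80/20 arithmetic can be sketched in a few lines. The simulation below is purely illustrative (it has nothing to do with the Cognos or Dynamic Cubes APIs; all names are made up): a cache sized for roughly the hot 20 percent of keys sits in front of a stand-in for an expensive warehouse query, and most requests end up served from memory.

```python
import random
from functools import lru_cache

N_KEYS = 1000
HOT = list(range(200))            # the "hot" 20% of the key space
COLD = list(range(200, N_KEYS))   # the remaining 80%

@lru_cache(maxsize=250)           # memory budget sized near the hot set,
def warehouse_lookup(key):        # not the whole warehouse
    # Stand-in for an expensive in-database query.
    return key * 2

random.seed(42)
for _ in range(10_000):
    # 80% of requests go to the hot 20% of keys.
    key = random.choice(HOT) if random.random() < 0.8 else random.choice(COLD)
    warehouse_lookup(key)

info = warehouse_lookup.cache_info()
hit_rate = info.hits / (info.hits + info.misses)
print(f"cache hit rate: {hit_rate:.0%}")
```

The point of the sketch: caching a fraction of the data captures the large majority of requests, which is exactly why customers balk at replicating all 20 terabytes in-memory.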
The ability to tune, optimize, and manage the load between IBM Cognos and the database has turned out to be even more important than I expected.
Data warehouse technologies will continue to evolve, and in-memory technology will keep moving forward too. I’m more convinced than ever that we need to be good infrastructure citizens, and give our customers the flexibility to organize, balance, and evolve their systems.
Both in-memory and in-database have their place. Essentially, why be forced to pick a side when you can have the best of both worlds?
I’ll be at Business Analytics Forum (Oct. 21-25 in Las Vegas), possibly wearing a t-shirt that says, “I love the Aggregate Advisor!” I hope to see you there.
For more information:
· Join my Tech Talk on Dynamic Cubes on Oct. 16 at 12:00 p.m. EDT. Register here.
· Learn more about IBM’s flexible business intelligence platform
· Read the whitepaper, “In Memory Processing”