Recently, I had the distinct pleasure to deliver
a presentation on Data Storage and Compliance at the IBM Tivoli event 'Business
Without Limits 2012' in Bangalore,
India. More than 100 attendees, representing almost every industry, attended the event.
My track for the day: Addressing Data Growth, Threats and Compliance; Unified
The volume, velocity and importance of data have
increased dramatically during the past few years to the point where most backup
and archiving solutions can't keep up with the scalability, functionality,
performance, reliability and budget realities of today and tomorrow. Attendees
learned how to reduce backup data capacity by as much as 95%; how to reduce
the amount of new data at risk by 90% or more; and how to simplify global data
recovery operations and achieve compliance by leveraging a unified management approach.
I was privileged to present such an interactive
session, where customers learned how our broad product portfolio can help
address their business challenges.
IBM now brings ‘Business Without Limits 2012’ to several cities across the United States
in October and November. This is an
exclusive IBM Tivoli event designed to increase awareness and thought
leadership among IT managers, infrastructure leaders, systems
administrators, storage managers, and data center managers. IBM’s Business
Without Limits event is coming soon to the following cities:
The event will focus on how IBM’s Integrated Service Management strategy
brings together different capabilities to enable integrated delivery of
business services across complex, interconnected physical and digital infrastructures.
IBM’s Business Without Limits event will have the following storage tracks:
* The pivotal role of storage in the modern data center
* Backup and unified recovery
* Moving your data protection headaches to the cloud
* Storage analytics and reporting
This conference will explore how you can capitalize on the opportunities of
a smarter planet and remove the barriers to innovation that will help you
achieve “Business without Limits.” As
today’s leaders transition to smarter, flexible cloud infrastructures
that speed the delivery of innovative products and services, effective storage
management becomes a critical component of that success. Please join us at this event to learn more!
IBM Systems & Software InterConnect 2012 is almost here! Are you registered yet? IBM is inviting more
than 2,500 global leaders in business and technology to attend this
first-of-its-kind, cross-IBM event taking place October 9-11, 2012, at the
Resorts World Sentosa, Singapore.
Join us at InterConnect and you will learn directly from successful IBM
clients, technical decision-makers and industry experts who will share best
practices for achieving your organization’s strategic goals. They'll explain
how they’re fulfilling the vision of their senior leadership – so you can
define a path to turn your opportunities into outcomes.
What is InterConnect really about? Making connections. As an
attendee, you’ll have numerous opportunities to network and meet one-on-one
with your peers, IBM leaders, industry experts and global companies that are
achieving accelerated growth.
You will also have the opportunity to participate in rich "Hot
Topic" sessions hosted by senior IBM thought leaders that showcase
successful business strategies leveraging the breadth and depth of IBM solutions:
* Changing the Economics of IT with IBM
* Leveraging Security Intelligence to Protect Your Most Valuable Corporate Assets
* Rethink IT. Reinvent Business with Cloud
* Transforming Critical Business Processes
* Unlocking Opportunities with Big Data
* Gaining Competitive Advantage Through
* Creating Exceptional Experiences by Combining Social and Commerce Best Practices
* Speeding Innovation and Extending Reach
* Transforming IT for Insight and Efficiency with Smarter Storage
* Enabling Growth with Critical Information
In the Smarter Storage session, you will have the opportunity to hear
directly from IBM’s General Manager of System Storage & Networking, Brian
Truskowski, as well as Laura Guio, VP & Business Line Executive for IBM
Storage. Attend this session to gain valuable insights from clients who are
transforming their business with IBM Storage solutions, while meeting budget
constraints at the same time. Learn best
practices for optimizing your storage management performance and cost, and find
out how IBM Storage can increase the return on investment of your existing storage infrastructure.
The Smarter Storage session will also feature key Tivoli storage announcements focused on
storage virtualization and taking your storage to the cloud. You won’t want to miss this!
As an IBM marketing manager, my job includes writing about storage technology. This post is about more than technology, though. It’s about a new breakthrough capability for managing storage costs and service levels.
I recently met with IBM Distinguished Engineer Mike Sylvia, who has been working on a Business Transformation project to enable automated right tiering for storage in IBM data centers. Right tiering is the notion that data should be hosted on the optimal storage tier to balance cost and performance requirements.
Mike explained that applications tend to be hosted on top-tier storage. When he analyzed actual usage patterns, Mike found that most data can be effectively hosted on lower cost storage. Mike’s project put numbers to a problem that is often hidden from view and, until now, nearly impossible to solve.
Hosting data on the wrong storage tier turns out to be a huge efficiency problem. Mike predicts IBM will save $13 million over three years in one data center by periodically moving data to the right tier. During the pilot, users saw their cost for storage drop by 50% per TB on average. This is big.
Like many advancements, IBM’s automated right tiering capability is accomplished by integrating existing technology. Mike Sylvia’s project combines storage virtualization, storage management automation and analytics. Today, IBM offers the technology in a bundled solution called SmartCloud Virtual Storage Center.
How does it work?
Step 1: IBM’s storage virtualization controller collects detailed usage metrics about storage it manages throughout the data center, without impacting application performance.
Step 2: IBM’s Storage Analytics Engine studies usage patterns over time to understand performance requirements.
Step 3: Storage tier recommendations are generated in reports that can be shared with application owners and IT management.
Step 4: Storage virtualization enables online data migration, with no disruption to applications or users.
Repeat: Usage patterns change over time, of course, so right tiering becomes an ongoing process.
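The decision logic behind steps 2 through 4 can be sketched in a few lines. This is a hypothetical illustration only: the tier names, IOPS thresholds, and data structures are my own assumptions, not the actual analytics in SmartCloud Virtual Storage Center.

```python
# Hypothetical sketch of a right-tiering decision loop.
# Tier names, thresholds, and the metric structure are illustrative
# assumptions, not IBM's actual implementation.
from dataclasses import dataclass

@dataclass
class VolumeStats:
    name: str
    current_tier: str   # e.g. "ssd", "fc", "sata"
    avg_iops: float     # average I/O operations per second observed
    peak_iops: float    # peak observed over the analysis window

# Assumed tier performance ceilings: a volume belongs on the cheapest
# tier whose ceiling still covers its observed peak load.
TIER_CEILINGS = [("sata", 200.0), ("fc", 2000.0), ("ssd", float("inf"))]

def recommend_tier(stats: VolumeStats) -> str:
    """Map observed usage to the lowest-cost adequate tier (steps 2-3)."""
    for tier, ceiling in TIER_CEILINGS:
        if stats.peak_iops <= ceiling:
            return tier
    return "ssd"

def right_tier(volumes: list) -> list:
    """Return (volume, from_tier, to_tier) migration recommendations (step 4)."""
    moves = []
    for v in volumes:
        target = recommend_tier(v)
        if target != v.current_tier:
            moves.append((v.name, v.current_tier, target))
    return moves

# Example: a lightly used volume parked on SSD gets flagged for SATA.
vols = [VolumeStats("db01", "ssd", avg_iops=50, peak_iops=120),
        VolumeStats("oltp7", "ssd", avg_iops=900, peak_iops=1800)]
print(right_tier(vols))  # [('db01', 'ssd', 'sata'), ('oltp7', 'ssd', 'fc')]
```

In the real product, the migration itself is performed online by the storage virtualization layer; the sketch only shows why usage-based analysis can find the savings the pilot reported.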
Why does it work?
Automated right tiering delivers the efficiency benefits of Information Lifecycle Management without the headaches and hidden costs. Automated right tiering has significant benefits for both data owners and IT leaders, so everyone wins.
For example, application and database owners can gain the following benefits:
* Applications can move to top-tier storage when they need it, without waiting for a maintenance window.
* Average storage costs drop significantly, without a drop in services.
IT leaders benefit, too. For example:
* Storage tier decisions are based on analysis of actual usage patterns, not predictions.
* Storage performance management tasks are eliminated.
* Data can quickly and easily be moved back to its original storage tier if requested, without incurring an outage.
* IBM automated right tiering works with most storage systems, so deployment is nondisruptive.
The technology that enables automated right tiering has significant additional benefits, such as the ability to eliminate scheduled outages for storage system maintenance.
Problem solved. How has your organization addressed the storage right tiering challenge?
Backup 1000 virtual machines in less than 36 minutes
Is it possible to achieve higher levels of data protection, recovery and availability for virtualized systems than in your non-virtualized environment? Yes: that is what you will hear from IBM and VMware at VMworld® 2012.
We’re all aware of how data protection and recovery have only gotten more complicated with the explosive growth in server virtualization. Applications running in virtual environments are becoming even more critical for business success, and the data volumes residing in virtual environments are growing by leaps and bounds. Also, given the shared physical resources of ESX/ESXi systems, a smarter approach to I/O-intensive processes such as backup and recovery is needed.
As a virtual environment user, if you’re looking for faster backup and restore for VMware datastores, then IBM Tivoli Storage FlashCopy Manager Version 3.1 (referred to as FCM 3.1) is the answer. It is designed to handle high demands - but how fast can it be?
Seeking to expunge all doubts, and recognizing the demand for efficient and fast backup of virtual machine data residing in VMware datastores, the Tivoli Storage Manager performance team carried out a benchmark test.
To assess how fast FCM 3.1 can meet the data protection demands from VMware virtual environments, the team conducted tests on VMware environments that have up to 1000 on-line virtual machines with a total capacity of 18 TB of disk space.
The results were astounding. Test results for up to 1000 virtual machines (the maximum tested) showed that FlashCopy backup elapsed time increases linearly with the number of virtual machines:
* 500 virtual machines can be backed up by FCM 3.1 in 15 minutes
* 1000 virtual machines can be backed up by FCM 3.1 in 36 minutes
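To put the linear-scaling claim in concrete terms, here is a rough back-of-the-envelope interpolation between the two published data points. This is purely illustrative; actual elapsed time depends on hardware, datastore layout, and FCM configuration.

```python
# Rough linear interpolation between the two published benchmark points
# (500 VMs -> 15 min, 1000 VMs -> 36 min). Illustrative only; not a
# sizing tool, and only meaningful inside the tested 500-1000 VM range.

def estimate_backup_minutes(num_vms: int) -> float:
    x1, y1 = 500, 15.0
    x2, y2 = 1000, 36.0
    slope = (y2 - y1) / (x2 - x1)   # 0.042 minutes per additional VM
    return y1 + slope * (num_vms - x1)

print(estimate_backup_minutes(750))  # midpoint estimate: 25.5 minutes
```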
So how is this useful to you? FCM 3.1 provides:
* Simplified deployment and management of advanced, application-aware data backup
* Improved backup and recovery times, from hours to minutes
* Improved administrative productivity through simplified management and automation of routine tasks
IBM Tivoli Storage Software ROCKS for the 6th Straight Quarter
IBM announced its second quarter earnings yesterday, 18 Jul 12, showing great results in a very difficult macroeconomic climate. In his remarks on the investor call, IBM Chief Financial Officer Mark Loughridge specifically called out Tivoli Storage as a strong contributor to these results:
"In software this quarter, we had good growth in our business analytics and storage management offerings ... Tivoli software was up 6 percent at constant currency and gained share, driven by storage software growth of 13 percent at constant currency. Tivoli Storage Management continues to perform exceptionally well, growing double digits at constant currency for the sixth consecutive quarter ... Storage hardware revenue was flat at constant currency, with the value continuing to shift to software, as you saw with the ongoing success we’re having in our Tivoli storage software offerings."
The success of Tivoli Storage Software is largely the result of the strong growth of the Tivoli Storage Manager family, our flagship data backup and unified recovery management platform. This is obviously a very competitive and highly saturated market segment - everybody has a backup solution in place - and it's a segment that is projected to grow only 8% this year. So why is Tivoli Storage Manager (TSM) doing so well? We believe it's because the requirements for data protection have changed dramatically over the past 3 to 5 years, and we've made the improvements to the product necessary to meet and stay ahead of those challenges.
1. Data is growing at 40%-60% per year, and legacy backup software, especially solutions that rely on adding media servers as they scale, cannot keep up with this growth. TSM has seen an 800% increase in its scalability since 2009, now supporting up to 4 billion data objects in a single TSM server.
2. IT environments have become increasingly complex, and important data assets are now created and managed in more places than ever before. Many companies have deployed a wide range of point solutions to handle the different requirements of key applications, virtual machines, remote offices, employee workstations, disaster recovery, etc., but this only adds cost and complexity, to the point where you lose visibility and control of the overall data protection infrastructure. TSM, however, offers a true Unified Recovery Management platform that ties all of these advanced technologies together in a single user interface, making it easy to ensure everything is protected and can be recovered quickly when something goes wrong.
3. Costs are out of control - mostly due to points 1 and 2 above. Backup is really just an insurance policy - it doesn't add anything to the top line, so you want to spend as little on it as possible. To address this, we've added tons of new capabilities into TSM at no additional cost, including built-in data deduplication and off-site replication, and we've introduced a back-end capacity pricing model that encourages you to take advantage of TSM's outstanding data reduction and data lifecycle management capabilities to further reduce your overall cost of ownership.
We are seeing many customers consolidate their diverse backup infrastructures onto the TSM platform, and we are also seeing success with Managed Service Providers (MSPs), some through the "Backed Up by IBM TSM" partner program, leveraging TSM to offer enterprise-class data protection services to small and medium size businesses and agencies that could not otherwise afford this level of protection.
To learn more about how Tivoli Storage Manager can improve the performance and efficiency of your IT environment, please contact your local IBM rep or Business Partner. We would love to have you be part of our continuing growth and success.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
IDC has recently released its Worldwide Storage Software QView for the first quarter of 2012. In it, IDC estimates that the total storage software market for 1Q12 grew about 3.3% over 1Q11. IBM had a solid quarter while Symantec faltered, allowing IBM to take the overall #2 share rank position for 1Q12.
* In the overall storage software market, IBM moved up to the #2 share rank position in 1Q12, gaining 2.0 share points over 1Q11.
* IBM retained its #1 position in archiving software, growing faster than the market. HP holds the #2 spot with its 2011 acquisition of Autonomy.
IBM offers a comprehensive, flexible storage management software portfolio that helps organizations address storage management challenges across the enterprise, including data centers, remote/branch offices and desktop/laptop computers. Learn more about the specific components within the IBM storage software family that can help you create a more responsive and resilient storage infrastructure for your on demand business.
Most compelling in this announcement is the continued adoption of the XIV user experience across the IBM storage family. When IBM acquired XIV in 2008, it brought along arguably the coolest user interface in the storage market. It is simple and intuitive, easy to navigate, and yet provides powerful levels of visibility and control of the storage environment.
Here’s what the new TPC V5.1 main dashboard looks like:
But what was most exciting for me, as the product marketing manager for the IBM Tivoli Storage Manager family (http://www-01.ibm.com/software/tivoli/products/storage-mgr/productline/) was the “Statement of Direction” that was included in the TPC V5.1 Announcement Letter. In it, IBM states that it intends to adapt this advanced administration GUI for use with IBM Tivoli Storage Manager. Woo Hoo!!
This intuitive GUI approach is already being used across the IBM storage portfolio of software and systems: the new Tivoli Storage Productivity Center, IBM System Storage SAN Volume Controller, IBM XIV Storage System, IBM Storwize V7000 Unified, and IBM Scale Out Network Attached Storage (SoNAS). Customers can leverage this user interface consistency to simplify the management of various systems within their data center such as unified recovery, enterprise storage administration, and individual storage systems.
Tens of thousands of Tivoli Storage Manager (TSM) administrators have “learned” to love TSM’s command line interface, and it will continue to be the fastest, most powerful approach to managing your TSM environment. But to attract a new generation of users, and to expand our market into new areas, we believe that this new user interface will be another huge step forward.
Starting with the release of TSM V6.1 in 2009, our developers have continued to make significant improvements to the daily lives of TSM administrators, including (but not limited to):
Replaced TSM’s internal relational database with a full DB2 implementation, reducing overall TSM administration time by as much as 40% and enabling massive scalability
Improved reliability, performance and availability
Automatic push of client software updates (80% time savings)
Integrated reporting and monitoring, with Cognos Business Intelligence Reporting tools to help generate custom reports faster
Unified Recovery Management to manage the entire distributed backup/recovery infrastructure, from mainframe to laptop, from a single admin interface
A new capacity-based licensing model that eliminates the need to count TSM PVUs (processor value units)
The thought of a new Tivoli Storage Manager Admin Console, leveraging the success seen across the storage hardware family (and now TPC as well) and providing a common look-and-feel across our hardware and software offerings, is something to get really excited about. I can’t tell you when it’s coming, and we can’t make any promises that it’s coming at all, but I’m pumped.
The plan will be to roll out functionality in the new UI incrementally. If there are any specific things, beyond the obvious basic functions, that you would like to see in the first release, please drop me a note at email@example.com and I will forward it to Product Management.
Disclaimer: IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.
Mike Griese, TPC Product Manager, presented Tivoli Storage Productivity Center v5.1 to a huge gathering at IBM Edge on the opening day. The video is now available on YouTube. To view more videos from IBM Edge, visit: http://www.youtube.com/user/IBMEDGE2012
Posted on behalf of Martine Wedlake, Ph.D., Storage UI Architect, IBM Software Group
From talking with customers, we know that it's really important that you find what you need quickly and easily. The original navigation structure for IBM Tivoli Storage Productivity Center (TPC) was built around a resource explorer model -- very much like a Windows file explorer. This, unfortunately, means you can have a ton of entries in the navigation that you'll need to hunt through to find anything.
For example, I took a look at one of our TPC deployments in the lab and started counting the number of clickable entries in the navigation -- I stopped counting once I got to 1000. Based on how far I got, I'd say there were about 1500 or so. I should point out that this is not a particularly large deployment -- 25 storage systems, 7 servers, 4 hypervisors, 5 fabrics with 46 switches. You can expect a much larger set of entries in larger deployments.
So, we knew pretty early on that we needed to improve the navigation. To do that we switched from a resource explorer view to a by-category view. This allowed us to dramatically simplify the navigation to only 13 high-level categories and no more than two levels deep. No more hunting and pecking to find what you want!
We also made it possible to directly link to the things you want without having to go through the navigation at all. For example, for an SVC storage system's detail page you can link directly to the set of backend controllers in your environment consuming the storage. You don't need to go back out to the navigation menu and then try to track down the servers all over again. Here’s a picture:
The overall concept is that whenever you see something interesting, you should be able to drill down into it. In addition to the navigation of the product, we've spent considerable effort making the content of the user interface easier and more intuitive, and making it consistent with the work we had done previously on the Storwize V7000 and SAN Volume Controller user interfaces – if you've seen one of our GUIs, you'll be able to get up to speed quickly on any of the others.
To that end, we borrowed significantly from the Storwize V7000 GUI, for example: configurable tables, visual theme, embedded help system, charting and general icons. Here’s a screenshot of Storwize V7000 GUI to help show the similarities:
Beyond these cosmetic enhancements, we spent a lot of time working with our stakeholders to deliver the content in an intuitive and simplified way. Knowing what to put on the pages and how to simplify the pages involved a dramatic shift in our development process. But, before I move on to that, I really need to highlight the improvements made with reporting.
In this release, we've embraced Tivoli Common Reporting, which includes IBM Cognos. This is a huge step forward for improving your ability to view and create reports for TPC.
To start with, you will not need to know SQL or database schema to create reports -- the drag and drop interface allows you to simply incorporate the data columns you wish to show and Cognos already understands the relationships between the entities. For example, let's say you want to show the volumes connected to a given server. In Cognos, you simply add columns for the Server Name and the Storage Volume into the canvas. The tool already understands the relationships between these entities and will automatically join the data appropriately to show which volumes are mapped to which servers.
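The "automatic join" idea described above can be illustrated with a tiny in-memory example. The field names and data here are invented for illustration; they are not TPC's actual schema, and Cognos performs this join declaratively rather than in application code.

```python
# Illustrative sketch of the automatic-join concept: given a modeled
# relationship (volumes reference servers by server_id), the reporting
# layer joins the entities for you. Field names and data are made up.

servers = [{"server_id": 1, "server_name": "app01"},
           {"server_id": 2, "server_name": "db01"}]
volumes = [{"volume_name": "vol_a", "server_id": 1},
           {"volume_name": "vol_b", "server_id": 1},
           {"volume_name": "vol_c", "server_id": 2}]

def volumes_by_server(servers, volumes):
    """Join volumes to servers on server_id, as a reporting tool would."""
    by_id = {s["server_id"]: s["server_name"] for s in servers}
    return [(by_id[v["server_id"]], v["volume_name"]) for v in volumes]

for server, volume in volumes_by_server(servers, volumes):
    print(server, volume)   # e.g. "app01 vol_a"
```

The point is that the report author only names the two columns; the relationship that drives the join is already part of the data model.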
Of course, we also provide upwards of 45 reports out of the box for those who don't wish to create reports themselves. Another neat feature is that the reports included with TPC can be copied and edited with the built-in editor tool within Cognos, so you can take one of our reports and modify it to your liking. Here are some of the reports that are included with TPC:
Working with our customers is one of the most rewarding aspects of my work here at IBM. For this release of TPC, we employed a radically different development model from what we have used in the past. We, like many in the industry, used to develop using a methodology called waterfall, where requirements are captured and approved at the beginning of the project, leading to a phase of high-level and low-level design and eventually to development, test and delivery to the customer. For this release of TPC, we wanted to include customer input throughout the development cycle -- not just at the beginning when collecting requirements.
As such, we hosted 17 sessions with 34 customers, 7 business partners and 4 internal customers, spread throughout the development cycle (held monthly). We also sent developers and GUI designers out into the field to talk directly with 7 customers. From these combined sessions, we captured 261 distinct requirements and were able to deliver 188 of them within the first phase of development, with 47 deferred to the second phase. That means 72% of the requirements are already implemented in the first phase alone, and 90% are expected to be implemented by the second phase. This is very impressive compared with traditional waterfall development.
The best part of an iterative, agile approach is that we are constantly evaluating the effectiveness of the solutions. We learn right away if something isn't quite hitting the mark, and have plenty of time to make changes to improve it.
As a quick plug, it is not too late to participate in our Early Adopter Program for the next phase of TPC. Please feel free to contact me directly (firstname.lastname@example.org) if you would like to participate. We would love to work with you to make TPC even better.
I am at the Edge Conference this week with my trusty colleague Nathan Smith (@nsmith01tx), the Rich Media Lead for the Tivoli Digital Marketing Team. As two veteran event attendees, it was refreshing to go to Edge and see a new conference put together with such style and aplomb. Edge used to be four different events. This is the first year that they all got pulled together into this inaugural Edge event.
If you are thinking about jetting out here to catch the last three days, don't bother. The conference is sold out. However, you can catch the general sessions on LiveStream. If you jump on Twitter while you're watching, it's almost as good as being there. Use the conference hashtag #ibmedge to join the conversation or to listen to the backchannel as it happens. Some of the folks who are out and about are Jon Toigo (@jontoigo), Chris O'Connor (@ChrisTheAnalyst), Ray Lucchesi (@RayLucchesi) and Al Hollingsworth (@AlHollingsworth), among others. Mary Hall has an excellent who-to-watch-and-follow post on influential bloggers and tweeters in the storage space.
I saw Jon wandering around today and plan to meet him. Alex, from Emulex, can also be found at the SocialEdge area, along with @staceytabor and the Baptie group. In fact, go to the SocialEdge area and ask @staceytabor about the #storagebeers tweetup planned for Wednesday at 5pm. It's invitation only, but tell Stacey that @brenny sent you.
I am not going to write about the general session because Tony Pearson has an excellent writeup of it here.
However I will mention that the real time compression for active data announcement got a lot of attention in the backchannel.
I'd write about the TPC 5.1 release, but Amalore Jude has done a fine job of it here. In fact, if you are looking for news about the TPC 5.1 announcement and features, it's well worth reading Jude's other posts on the Storage blog. For those in attendance at Edge, there are some great sessions that lay out the TPC 5.1 features and benefits. Gary Fry has one such session, where he shares his experiences as a beta tester of the new version.
For those not in attendance, the Tivoli User Community (TUC) is hosting a webcast on the TPC 5.1 release and details. If you are not yet a TUC member, then it's definitely worth checking out. Last I heard, the membership stands at over 20K strong.
On the social media front, it's great to see a set of very strong bloggers and analysts in attendance, blogging and tweeting the event. It seems many have read about the social media plans for Edge and are taking advantage of them.
On the lighter side, it was a nice surprise to see an IBM Conference open up with Led Zeppelin's Kashmir, played by Bella Electric Strings.
Comedian Don McMillan (@DonMcMillan) held court during the general session and was a hoot, as always. All of these are available for replay on the LiveStream channel.
And finally, it is a nice bonus to have the conference at a Waldorf. It's the one place where you'd imagine that even conference food would taste good. Well, they have not disappointed so far. Yesterday, I enjoyed a vegetarian paella, with a plantain salad served with baby shrimp and mixed vegetables. I'll let you know what lunch is like on Tuesday.
As a solution marketing professional, I seem to focus on communicating the key features and benefits of my products. In the case of IBM Tivoli Storage Manager (TSM), those things include its scalability, functionality, reliability, performance and ability to reduce your costs. However, what we don’t focus on enough, it seems, is the importance of the vendor itself - its stability, ability to execute, and commitment to provide exceptional customer support.
I was reminded of this by a stream of e-mails originating in South Africa. A large bank there, which we unfortunately cannot name, has been a TSM customer for more than a dozen years. It was recently acquired by a global banking company based in the United Kingdom. In doing its due diligence, the acquiring bank determined that it needed to evaluate some documents that were created, and deleted, in the early 2000s.
The South African bank had been keeping periodic copies of its backup tapes, and copies of the TSM database, for long-term retention, and had a reasonable expectation that the required documents were somewhere in its stack of tapes. However, it had not been following the best practice of transferring the metadata from the database when upgrading TSM from version to version over the years.
They needed TSM version 4.2, which IBM ended support for in 2002 (ten years ago!). And they needed a version that ran on AIX. Yikes!
The problem was that they needed to create a new TSM Server, using a very old version of the software in order to restore the old TSM database, which would then point them to where the documents were.
Of course, the easy thing to do would be to tell the customer they were out of luck, but that’s not what IBMers do. A worldwide search went out, and one of our long-resident software developers was able to dig out the needed code. The result … the bank was able to retrieve the needed files and completed the acquisition with only a minimal delay.
I came to two conclusions after seeing this story play out. One, you really do need to have a long-term data retention / archive strategy, and follow it. Simply sending backup tapes to a vault is not a viable strategy. You need to worry about how you are going to restore that data in 10, 20, or more years, when all of your IT infrastructure has been refreshed, virtualized, clouded, or whatever comes next. Think about using a content management system rather than your backup software when you need to retain certain information for long periods of time, and plan for periodic migrations of the data to new platforms.
Second, when you fail to follow the advice in ‘one’ above, wouldn’t it be good to have a partner that will go to the ends of the earth to help you out of whatever jam you find yourself in? I know that all vendors aspire to this, and many claim it, but all I can say is that I see it every day at IBM, especially among the Tivoli Storage team. You could do a lot worse. I really enjoy being a part of this team.
I was going to close with a joke about needing to find a player for my stash of 8-track tapes – but that would just be giving away how old I am.
Today (June 4), IBM announces an enhanced Tivoli Storage Productivity Center v5.1 (TPC) that offers superb usability, unmatched reporting and integrated packaging like no other. Customers, sellers and partners are all excited, quite understandably.
When we previewed the new user interface at Pulse ’12, there were many in the audience who wanted access to it right away. The new user interface is in line with IBM’s strategy to offer a consistent user experience across its major storage offerings – the look and feel is great, navigation is a breeze and, most importantly, quick access to any information from the main dashboard is simply terrific.
The new dashboard view of TPC…
With v5.1, you can access your TPC management console through the web. The dashboard shows you not only capacity and connectivity information but also details on event alerts, including their criticality.
Entity-based views are quite refreshing too. Refer to the sample image below, which shows the overview of a Storwize V7000 system. From this overview screen, you can see utilization, activities and data throughput, among many other things.
Click here to watch a short video on ‘TPC’s new user interface’.
TPC is now integrated with IBM Cognos, bringing industry-leading business intelligence capabilities to help you manage your storage environment more easily and efficiently. Cognos lets you simply drag and drop metrics to assemble meaningful insights, and doing so requires no advanced skills or SQL code.
A sample report created through Cognos…
Well, now the wait is over. To get access to the new user interface and the Cognos-based reporting, talk to your IBM sales representative or IBM business partner today.
Download the TPC data sheet. View the 2012 Gartner Magic Quadrant for Storage Resource Management and SAN Management Software, compliments of IBM, here.
In a previous post, we talked about the recent reviews that Tivoli Storage Productivity Center (TPC) received. In particular, we are very pleased with the 'Leader' designation received in the recent Gartner Magic Quadrant for Storage Resource Management.
It's not just the analyst reactions that are positive. Based on a customer-focused feature list, the product team undertook an overhaul of the graphical user interface (GUI) and introduced a dashboard that provides easy-to-use, comprehensive reporting. To ensure they had got it right, the proposed changes were demonstrated on the Expo floor at the Pulse 2012 conference in Las Vegas earlier this year. Responses from the user base were enthusiastic, to the extent that this next iteration is quickly becoming a sought-after item.
A beta test program was initiated at the conference, as the true litmus test of whether the proposed new features would hold up in a real production environment. Early responses point to some interesting observations. When polled about their experiences with the next evolution of the product, one of the most talked-about aspects was the set of features provided to simplify complex reporting. Beta testers derived great time and productivity benefits from having a picture of the full storage environment, something they previously had to assemble from multiple places. A common benefit registered was time savings when it came to complex reporting.
What is compelling, however, is the business analytics that this next iteration yields. Tivoli Storage Productivity Center (TPC) provides detailed topology views of the entire storage infrastructure. In the overhauled GUI, administrators can observe the overall health of the environment instantly. A simple right-click provides detailed views of each of the storage network entities. The facilitation of these environment-wide views led a beta customer to observe that 'more than just the storage engineer can now get a simple view of their SAN environment'. What does this mean? It means that what started out as a time saver for the practitioners - the storage engineers - now becomes an entryway for the management team to get a quick look at the overall environment, allowing for higher-level strategic discussions about storage environments and needs.
Is this good or bad? A recent survey revealed that CMOs will outspend CIOs on IT by 2017. When I tweeted this, I was asked by @jamie_joyce why it would take that long. My answer is that it's likely due to the classic tension between a cost-saving position on infrastructure and a growth position on business analytics or feature offerings. When you think about Big Data within business analytics and the proliferation of mobile devices as two huge growth areas, the commonality is a mass proliferation of data in orders of magnitude never imagined. The conversation comes back to storage, and the associated resource management.
Which way does your company lean? Where is your head in that tension between cost savings and growth when it comes to your storage environment?
I chatted with Product Marketing Manager Amalore Jude about this and about the kind of reaction the team got when they demoed the new GUI at Pulse in Vegas this March. He was quite pleased with the response. 'Customers were very excited looking at the new, next-generation interface,' he told me. 'Many are awaiting June 4, when they can actually lay their hands on it.'
Well, June 4 is around the corner. If you are a regular reader of this blog, it's quite likely I will meet you at the Edge Conference in Florida next week. If you're there, please tweet me @brenny or find me somehow and say hi.
The conference is selling out, but there are still passes available for the Tech Edge portion of the four-part event. It's not too late to register. The Tech Edge portion is well laid out, with over 250 sessions led by IBMers and customers. Sometimes it's better to hear the war stories of your peers when you're trying to figure out how to exploit what you have, or are considering getting.
One customer who is speaking is Gary Fry of Unum. His session, on June 6 from 10-11am in Room 115, is on Unum's use of the SAN Volume Controller and his experiences beta testing the new evolution of TPC.
So, if you are going, then I hope to see you out there. If you haven't yet decided, then getting a first look at this next evolution of storage infrastructure management is hopefully good motivation to consider it.
Gartner’s Magic Quadrant for SRM and SAN Management software is one of the leading industry publications that provides competitive benchmarking across storage management capabilities and helps support decision making for investments in storage management software. In its latest edition, Gartner positions IBM in the ‘Leader’ quadrant.
IBM Tivoli Storage Productivity Center (TPC) is a clear leader in the SRM market; many enterprises are using TPC today to manage their ever-growing, complex and highly critical storage environments.
TPC is designed to provide comprehensive device management capabilities that include automated system discovery, provisioning, configuration, performance monitoring and replication for storage systems and storage networks. TPC provides storage administrators a simple yet effective way to conduct storage management for multiple storage arrays and SAN fabric components from a single integrated management console.
TPC edges out all other vendors in terms of comprehensively achieving the vision for SRM. TPC provides storage management capabilities that allow administrators to efficiently simplify, centralize, optimize and automate storage management tasks. View the Gartner Magic Quadrant for Storage Resource Management and SAN Management Software, compliments of IBM, here.
If you haven’t unleashed the potential of TPC, watch out for the upcoming version 5.1 release – slated to be announced on June 4, 2012 at IBM Edge2012.
To learn more, please register for IBM's premier storage conference: IBM Edge2012 being held June 4-8 in Orlando, Florida. This is a 4.5 day conference, 100% focused on IBM storage solutions - with many TPC 5.1 and IBM SmartCloud Virtual Storage Center sessions and customer speakers. Tivoli speakers will be featured throughout the conference and more than 30 sessions will be focused exclusively on Tivoli’s entire suite of products, taught by IBM Distinguished Engineers, leading product experts, clients and partners. Special registration discount applies to all Pulse 2012 attendees! Register here.
Note: This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available here: http://www.gartner.com/technology/reprints.do?id=1-1A16V0B&ct=120405&st=sb. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
IBM Edge2012 is a 4.5-day premier storage event that brings together innovative IBM technologies, world-class training, leading industry experts, and compelling client success stories and best practices. With over 250 technical sessions, product demonstrations, hands-on labs, and exhibitions geared to roles spanning from business leaders to IT practitioners, Edge2012 is dedicated to helping you design, build and implement efficient storage infrastructure solutions. Tivoli Storage will be featured in over 35 sessions and several product demos at this event. You won't want to miss this!
Edge2012 will feature three separate events:
A 2-day exclusive event for business and IT executives and leaders, focusing on transforming storage infrastructures to conquer big data challenges while achieving superior business outcomes.
A 4.5-day technical event for IT professionals and practitioners, featuring cutting-edge education, hands-on labs and on-site certification geared for all levels.
Business Partner Forum: IBM business partners will learn about the IBM Storage strategy and roadmap; network with peers and gain valuable insights, contacts and resources geared toward helping build their business.
IBM System Storage Technical University at IBM Edge2012 (known as Technical Edge) will provide 4.5 days of world-class training with more than 250 technical sessions, hands-on labs and on-site certification, taught by IBM Distinguished Engineers, leading product experts, customers and partners. Technical Edge is sponsored by IBM Partner Intel. Perhaps the best part about Technical Edge is the involvement of customers in putting together the individual sessions. Sharing expert best practices is so important, as organizations are struggling with monumental data growth which is outpacing their IT budgets. IBM Technical Edge is specifically designed to help IBM customers and partners keep pace with techniques to improve their storage efficiency.
With shrinking budgets, education and technical expertise become more important than ever to keep your data center running at maximum performance. An upcoming IBM Global Data Center Study reveals that fewer than 1 in 5 data centers are highly efficient. Data centers that run efficiently can allocate 50% more of their IT budgets to new projects.
With input from IBM Customers and Business Partners, IBM has developed an agenda that will cover some of the most compelling topics in IT storage today. The full agenda of technical classes is attached to this blog post as a PDF file. Click here to download descriptions of all the sessions.
Attendees will be able choose from a wide variety of sessions which have been arranged into the following tracks:
• Backup / Recovery and Archiving
• Big Data and Analytics
• Business Continuity
• Enterprise Applications
• Mainframe Storage Solutions
• Product Updates
• Security and Compliance
• Smarter Storage
• Storage Efficiency
• Storage Management
• Storage Networking
• Virtualization and Cloud Storage
IBM Technical Edge is part of IBM's Edge Conference taking place June 4-8 in Orlando, Florida. Storage Community members can save on registration by registering early, before May 6.
Today’s general session kicked off a bit later than usual this week after an evening of rockin’ out with Maroon 5! The MGM Garden Arena was wall-to-wall IBMers (with a smattering of party crashers) masquerading as concert-goers as cameras flashed, video cameras whirred, and everyone competed (in classic IBM fashion) in the "best Maroon 5" photo contest. You can check out my Pulse 2012 Storage Management photos.
Now, onto today’s General Session, which has been
anticipated all week, due to Steve Wozniak’s appearance onstage with
Grady Booch. More on that later........
First, kicking off the final General Session of Pulse 2012, Erich
Clementi, Senior Vice President of GTS talked about the pressures of a
Smarter Planet: where everything is instrumented, interconnected, and
intelligent. He discussed IBM's SmartCloud platform and provided examples
of how IBM is helping clients get beyond virtualization by offering deployment
choices across private and hybrid clouds, managed services or delivered as
software as a service. He also stressed that to re-think IT and reinvent
your business, you need a trusted partner.
Next up: Helene Armitage, General Manager of STG, discussed how the consumer data explosion will have a tremendous impact on systems innovations and how this is driving the infrastructure of the future. An impactful data point she cited: 80% of people will have mobile devices in the next 5 years, which has significant implications for how we build data centers. In this scenario, I especially liked the challenge she posed to us to assume a leadership role in figuring out the greatest value to maximize.
Now, on to Watson.........This presentation, by Manoj Saxena, General Manager of IBM Watson Solutions, was especially moving as he discussed the real-life impact that Watson is having in the healthcare industry: acting almost like a physician’s assistant, and helping in disease diagnostics. Across so many industries, Watson has been tapped to address huge challenges that leverage Watson’s analytical technology. Interesting to note that this technology definitely plays in a Cloud-based IT environment.
As the finale to the three days of Pulse general sessions,
IBM Fellow Grady Booch interviewed Steve Wozniak, Co-Founder of
Apple Computer. Key topics focused on Wozniak’s fascinating life as an
inventor, teacher, and entrepreneur. Such great stories he shared such as the
time he and Steve Jobs used their technology "know-how" to crank call
the Pope. Seriously, though, Wozniak was so passionate about the
importance of educating kids on computers and programming and the meaning of 1s
and 0s that, after his stint with Apple, he went on to teach 5th graders for a while. Scott Hebner joined Grady and Woz on stage to take
questions from Twitter, using the hashtag #askwoz. And, there were some
great ones.......like: "what’s the next killer app, Woz?" How
‘bout Watson for the iPhone?! And, when asked what advice he’d offer
IBM? Woz says: "Stay as a marketing driven company. You
know your customers’ needs, and that is key. I admire that!"
Thanks, Woz! Will do!
Drumroll, please.........Now on to all the Storage Management
happenings at Pulse, Day 3. There were 2 simultaneous storage sessions
that kicked off this morning: LV
1871 and their virtualization journey, and Hertz Australia’s TSM 6 and TSM SUR
experiences. LV 1871, a German insurance
company, discussed how IBM SAN Volume Controller and Tivoli Storage
have helped it increase its business agility, enable a standardized management console in the data center, and elevate IT service levels.
Meanwhile, in the next room, Hertz Australia’s Richard Whybrow (with Hertz mascot Horatio) spoke about Hertz’s TSM 6 experience and how they had also considered CommVault and NetBackup, but IBM was the most cost-effective choice by far. We like that! As a sidenote, Richard was also the IBM Tivoli User Group video winner with his video of how he uses TSM at Hertz. Later in the day, Richard also participated in a customer video interview for us, in which he re-stated on camera how much more expensive CommVault and NetBackup were.
Later in the day, the storage sessions continued with Peer 1
Hosting discussing how they leveraged the data reduction capabilities of TSM to
effectively manage thousands of customers' backup and recovery
environments. Also, the Principal Financial Group reviewed best practices
and capabilities co-developed by IBM and Principal Financial Group, which
enable TSM VE to execute parallel backup and restore operations on multiple
virtual machines simultaneously.
The final storage session of Pulse featured Tivoli BP Frontsafe discussing the TSM portal cloud management solution, which greatly
maximizes the manageability and effectiveness of your TSM environment,
basically, allowing you to deliver TSM as a Cloud Service. Key benefits
highlighted include: faster way to bring TSM to market with few resources
needed; eliminates the complexity of client-side TSM administration;
easy-to-use daily reporting and support tools; and, lets you set up multiple
layers of distribution (OEM branded all the way down). Also, this
solution makes TSM available to small- and medium-sized companies.
As I wrapped up Pulse 2012 with a few last minute photo
opportunities for Tivoli Storage, and ended the evening with a spectacular meal
at Todd English’s Olives
Restaurant in the
Bellagio with colleagues, I couldn’t help but think "best Pulse
ever," but, I only have 2 under my belt, now, so what do I know?
But, really, how DO we top Maroon 5 AND Steve Wozniak together at a single
Pulse?? I can already hear the creative drumbeat of Pulse 2013 in
the distance now............
iLuminate kicked off the General Session with an innovative, cool-to-watch performance – and as Scott said, their performance plus coffee makes for a wide-awake audience!
Steve Mills took the stage – his expertise and client focus really shine through, but the best part was hearing about how IBM “eats its own cooking”. He talked about the IT transformation that IBM has undergone, under Jeannette Horan’s direction, to increase productivity and efficiency while reducing costs. Did you know that IBM has to manage over 100 petabytes of production data? How do you think they do that – you’re right, Tivoli Storage Solutions! Steve has such a way with words. I especially loved the sound bite, “Linux runs like a scalded dog on the mainframe.”
Next up was Bob Picciano – he brought up an impressive panel of customers from Equifax, Rogers, GE and Erie County. Each had a unique story to tell about working with IBM to optimize their IT environments.
The Storage sessions today continued to support the main themes of unified data protection, storage virtualization, and cloud. There were proof points from Bank of China, with their consolidated backup and recovery environment, and from Unum, who delivered a presentation on storage virtualization using SAN Volume Controller and Tivoli Storage Productivity Center. And speaking of TPC, I hope you made it down to the Solutions Expo to see the new GUI for TPC (at Ped 44) that is being tested right now with customers. Butterfly Software also had a session on data center consolidation – if you haven’t heard what Butterfly can do for you, you owe it to yourself to learn! In fact, all our partners in the Solutions Expo have said what a great show this has been for them so far.
Even though these are long days, it’s so good to see
everyone, hear from everyone, and learn so much! And tonight, WE DANCE! Maroon
5 takes the stage this evening, and I think everyone is ready for it! Remember we get to sleep in tomorrow, and hear from Steve
Wozniak, co-founder of Apple Computer, as Day 3 focuses on Innovation.
The Storage Management track at IBM PULSE 2012 kicked off in a big way this afternoon, with a presentation by Laura DuBois, vice president of the storage practice at analyst firm IDC. Laura reviewed where data and storage management technologies have been (focused on the core data center), where they are (spreading to the edge) and where they are going in the future (cloud services).
Ron Riffe, Tivoli Storage Product Manager, then shared IBM's vision for smarter data and storage management in two strategic areas - reducing the percentage of the storage IT budget that is dedicated to managing the copies of data, and controlling the overall cost of storage in the face of continuing data growth and service level expectations.
The first goal is addressed through Unified Recovery -- a path that the Tivoli Storage Manager (TSM) family has been on for the past two years and will continue with new capabilities for integrating the complex array of data protection technologies into a single user interface. This reduces costs and avoids the serious risks of trying to manage many different point solutions for protecting data across different types of systems (including virtualized systems), applications and locations.
With the planned enhancements to Unified Recovery, our customers will be able to:
Free IT from the cost and complexity of legacy data management products
Get ahead of runaway data growth and Big Data
Be online quicker following data disasters
Transform your backup infrastructure into a service
Controlling overall storage costs is accomplished by a new solution suite -- the IBM SmartCloud Virtual Storage Center -- a storage hypervisor that virtualizes and manages heterogeneous storage systems. With this set of integrated capabilities, organizations can:
Dramatically improve utilization of their physical storage assets
Deliver tier 1 service regardless of hardware choice
Balance workload, manage lease termination, improve data center performance
Optimize their people for the challenges of day-to-day operations
Following the super crowded (standing room only) kickoff session, we started into a fantastic set of breakout sessions over the remainder of Pulse week. Today's sessions included:
STORServer and their customer, Management Council, Ohio Education Computer Network, described the use of TSM-based STORServer appliances, feeding into a central TSM-based community cloud, to achieve cost-effective backup and disaster recovery services across the state's education system.
Ricoh Americas detailed how the IBM storage hypervisor helped to streamline their storage management
Chesapeake Energy, a speaker at previous Pulse events, covered the cost and service level improvements that they are experiencing since upgrading their TSM environment to the latest release (version 6.3)
Tomorrow promises to be even more exciting, with a "main tent" demonstration of the SmartCloud Virtual Storage Center in the MGM Grand's Grand Garden Arena, many more informative breakout sessions (including my panel session on Modernizing Data Protection) and ending with a party and concert by Maroon 5.
Additional Related Links: Livestream videos from Pulse (Pulse folder) Tivoli User Group (TUG) Follow Pulse on Twitter with the #ibmpulse hashtag, or our Twitter accounts: @servicemgmt, @ibmtivoli, @ibmpulse, @assetmgmt, @ibmstorage, @ibmsecurity and @ibmcloud Follow our blogs: Service Management, Asset Management, Pulse, Storage
Pre-Pulse Tivoli Storage Management activities kicked off Saturday, March 3, with the Tivoli Storage Business Partner Summit. We had a strong showing of storage Business Partners, and the summit was a great way to gear up for Pulse, which kicked off last night with the Grand Opening & Welcome Reception in the Pulse Solution Expo Hall. Thanks to all our Tivoli Storage Business Partners for attending the Pre-Pulse Tivoli Storage BP Summit and sharing valuable insights!
During this BP Summit, we
heard from both IBM and Business Partners who covered key topics such as:
-TSM 6 migration with Butterfly
-TSM competitive positioning
-SmartCloud Virtual Storage Center
-STG cloud initiatives & other STG opportunities
-Key trends in the storage marketplace, and
-TSM Suite for Unified Recovery.
You can learn more about these key storage topics in the
Storage Management track, which kicks off today, Monday, at 2PM in Room
117. Speakers include Steve Wojtowecz, VP of Storage Software Development, and Bina Hallman, Director, Tivoli Storage & System z Software Product Manager. Joining these IBM speakers is IDC analyst Laura DuBois, who will address key storage trends. Following this kickoff session, there will be several Storage Management sessions at Pulse in rooms 115 and 117, as well as key demos in the Solution Expo. More details can be found on the Pulse site under the Cloud & Data Center Optimization Stream, in the Storage Management Track. And, don’t forget to leverage the SmartSite Agenda Builder so you don’t miss out on any key Pulse storage sessions!
Continuing the recap of the Saturday BP Storage Summit, we listened to Butterfly Software present their successes with TSM migrations, leveraging their assessment tool. You won’t want to miss their live data migration demo during the Monday storage Birds of a Feather session #1387 at 6PM in room 117. Special refreshments will be served!
We also heard from Frontsafe, winner of a Business Partner Award today at the Pulse BP Summit Day. You can also hear more about Frontsafe’s “Backed by TSM” solution at storage session 1360 on Wednesday, 3:30PM in room 115. Also, check out Frontsafe’s Livestream interview on Monday, 2:30PM at the Expo Stage in the Solution Center. You can watch this interview on the Pulse Livestream channel as well. Backed up by IBM Tivoli Storage Manager (TSM) is a unique Ready for IBM Tivoli program classification for clients, business partners and managed service providers who use the IBM Tivoli Storage Manager family of offerings as a core component of a data protection and recovery managed services or cloud-based offering. It is IBM’s new partner program for validated TSM cloud solutions. Visit the Ready for Tivoli / Backed up by IBM TSM area in the BP Café (part of the Solution Expo Center).
Partnering with Frontsafe, another storage BP, Starfire Technologies, recently joined Frontsafe’s BP program. You can listen to Richard Spurlock, CEO of Starfire, speak more about this partnership during his Pulse interview on the Livestream stage in the Solution Expo. Check it out here: Starfire Technologies Expo Stage Interview.
As Tivoli Storage has just finished a stellar year of significant growth in the marketplace, 2012 promises to be another strong year, with continued focus on key growth areas such as data protection, central management and the storage hypervisor. Please join us at Pulse in rooms 115 and 117 in the Conference Center all week to learn more! And, keep your eye on the Tivoli Storage Blog here for all Storage at Pulse happenings........
What do you think of when you see the name Riverbed? For those of you not familiar, Riverbed is an IBM partner and the leader in Wide Area Network Optimization. These days, Riverbed offers more than just WAN OP solutions. Riverbed products improve IT infrastructure, speed up application performance, reduce bandwidth utilization, and offer solutions to securely leverage cloud storage. For enterprises looking to implement strategic initiatives such as virtualization, consolidation, cloud computing, and disaster recovery, Riverbed delivers optimum performance for globally connected enterprises without compromising the end user experience.
When organizations consolidate IT and move to cloud environments, the distance created between users and their data often results in high latency and reduced bandwidth. Riverbed WAN optimization, network performance management, and cloud storage solutions enable enterprises to overcome these drawbacks. Riverbed makes it easy to understand, optimize, and accelerate IT, so that organizations can build a fast, fluid, and dynamic IT architecture.
Steelhead® appliances from Riverbed, Virtual Steelhead(TM), and Steelhead Mobile can increase network throughput and application performance by up to 100 times. Riverbed Cascade® provides enterprise-wide network and application visibility and analysis for both enterprise customers and service providers. Riverbed Whitewater® cloud storage gateways revolutionize data protection by leveraging cloud storage. And Stingray Traffic Manager® provides unprecedented scale and flexibility to deliver applications across the widest range of environments. All in all, Riverbed offers end-to-end solutions to analyze, accelerate and optimize an organization’s IT infrastructure without compromising performance for end users, no matter how far from the data center they reside.
Stop by the Riverbed booth E105 at IBM PULSE 2012 to see the latest in IT performance solutions.
Riverbed and IBM enjoy a strong partnership which, thanks in part to Riverbed’s Whitewater cloud storage gateways, extends to IBM’s storage management software ecosystem. Whitewater leverages public cloud storage to reduce backup and administration costs, improve disaster recovery readiness and provide secure off-site storage for critical backup data, providing LAN-like access to public cloud storage in a drop-in appliance.
What does this mean for the Riverbed/IBM partnership? A seamless integration with existing IBM Tivoli Storage Manager backup infrastructure and cloud-storage providers, paving the way to extracting more value from existing storage, application and network investments. Tivoli Storage Manager administrators can leverage Whitewater’s local caching and public cloud storage abilities to propel them into the next generation of storage and disaster recovery, leaving classic disk- and tape- based devices (and their operational and maintenance costs) behind. Together, Riverbed and IBM offer a best-of-breed solution which slashes costs and enables almost unlimited scalability, taking full advantage of the flexibility and cost savings offered by storage-cloud services.
Riverbed will be demonstrating how fast it can move TSM data to public cloud storage at IBM Pulse 2012 in Las Vegas, March 4-6. At the show, come by booth E-105 to ask for a Whitewater demonstration and learn more about how Riverbed can optimize and extend your TSM environment as well as accelerate your WAN with the Riverbed Steelhead product family.
Are you going to IBM Pulse 2012, the premier Cloud and IT Optimization event of the year? It’s at the MGM Grand in Las Vegas from March 3 – 7, and we have an awesome agenda with some first class speakers and entertainment.
But this blog is about our storage management software ecosystem partners that will also be attending and lending their support. If you will be at Pulse, please plan to visit with these companies while enjoying the refreshments offered in the Expo Center:
Butterfly Software offers an automated backup and storage assessment tool that can help you identify problems in your environment and the costs you will likely incur over the next 3 years; it also shows what a new solution based on IBM technologies will look like and cost. It’s all based on empirical data gleaned directly from your systems. Once you’re convinced to move to IBM, Butterfly also offers non-disruptive migration services. If you want to learn how much a smarter backup solution will save your business, please stop by Pedestal # 32 in the IBM SmartCloud Zone, and learn more at breakout sessions 1387 (5:00 Monday in room 117) and 1384 (3:30 Tuesday in room 117).
Cristie Software is our partner for Bare Machine Recovery solutions for Tivoli Storage Manager (TSM). Hundreds, if not thousands of our customers have deployed these solutions to help restore critical servers quickly. Learn about CBMR and TBMR at booth E-418 and breakout session 1035 (2:00 Tuesday room 117).
Front-Safe A/S provides a Tivoli Storage Manager front-end cloud portal that enables business partners to offer “backup as a service” to their customers. If you’re interested in moving to a services model for your backup environment, please visit pedestal # 45 in the IBM SmartCloud Zone and breakout session 1360 (3:30 Wednesday, room 115).
Riverbed has a long and broad relationship with IBM. If you want to learn how to extend your TSM environment to the cloud, securely and cost effectively, please stop by booth E-105 and ask for a demonstration of Whitewater. Instead of buying more tapes for your backup archive data, Whitewater helps you move that data to the cloud, where you pay only for the storage you use. Riverbed Steelhead appliances are also used by many IBM customers to speed the movement of data between locations using proven WAN acceleration technologies.
SEPATON was an early entrant into the Virtual Tape Library (VTL) market, and
provides a cost-effective but high performance alternative to magnetic
tape. Visit them at booth E-107. (I might get fired if I didn’t mention that IBM also has an excellent VTL solution, ProtecTIER, that you can learn about at Pedestal # 29 in the IBM SmartCloud Zone).
STORServer offers Tivoli Storage Manager in an integrated appliance, with a simplified, easy to use interface. It’s a great backup and recovery solution for mid-sized organizations and remote offices. Learn more at booth E-408 and at breakout session 1878 (3:30 Monday room 117).
And of course, I’ll be there as well. You can catch me around the storage pedestals in the Expo Center, and at breakout session 2136 (5:00 Tuesday, room 117). I hope to meet many of you there.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Are you going to the IBM PULSE conference (ibm.com/pulse)? I am, and I am hosting a panel discussion on the need to modernize backup and restore capabilities.
Scheduled to join me on the panel are:
- Randy Olinger, Director of Enterprise Storage Systems, UnitedHealth Group
- Gerardo Colon, Storage Administrator, Adventist Health System
- Peter M. Nielsen, CEO and Founder, Front-Safe A/S
The premise of the panel discussion will be that backup and restore just aren't as easy as they used to be, given the increasing complexity and distribution of IT, the growth of data to unsustainable levels, the pressure to improve service levels by reducing and eliminating downtime, and the need to cut spending. Our panel of experts will share how their organizations are dealing with these and other challenges, and I'm guessing that we'll cover technology solutions such as data deduplication and compression, snapshots and CDP, replication, simplified and unified administration, archiving and data lifecycle management, and how to do all these things while driving down costs.
But that's part of the fun of a panel discussion -- you never really know what you're going to get. It's scheduled for Tuesday afternoon, March 6th at 5:00PM Las Vegas time, in room 117. The session number is 2136. I hope you can make it!
Oh - and have you heard - Maroon 5 and iLuminate will be entertaining us during the event; you have to go!
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Backed up by IBM
Tivoli Storage Manager (TSM) is a unique Ready for IBM Tivoli program
classification for clients, business partners and managed service providers who
use the IBM Tivoli Storage Manager family of offerings as a core component of a
data protection and recovery managed services or cloud-based offering. It is IBM’s new partner program for validated
TSM cloud solutions.
Achieving Ready for IBM Tivoli software validation shows
customers that your solution meets or exceeds IBM compatibility criteria and
successfully integrates with the IBM Tivoli Storage Manager family of
offerings. Backed up by IBM TSM validation further demonstrates that your offering is an integral part of a TSM cloud or managed service solution.
Sound interesting? Want to learn more? Then be sure
to stop by one of the following venues while you are at Pulse
2012 for more details about this new program and how you can participate:
Partner Summit – Sunday, March 4: information on the program will be included in all the breakout sessions
Every year I try to publish a set of storage trends that I believe most IT shops are trying to address, and where technologies exist to help resolve them. Here are my thoughts for 2012...
1) Storage breakthroughs nipping the “Digital Dark Age” in the bud
Since the early 1990’s, an increasing proportion of data
created and used has been in the form of digital data. Today, the world
produces more than 1.8 zettabytes of digital information a year. Yet, digital storage can in many ways be more perishable
than paper. Disks corrode, bits “rot” and hardware becomes obsolete. This
presents a real concern of a “Digital Dark Age” where digital storage
techniques and formats created today may not be viable in the future as the
technology originally used becomes antiquated. We’ve seen this happen—take the floppy disk for example. A
storage tool that was so ubiquitous people still click on this enduring icon to
“save” their digital work and any word, presentation or spreadsheet
documents—yet most Millennials have never seen it in person. But new research shows storage media can be vastly denser than they are today. New form factors such as solid-state disks will help provide more stable, longer-term preservation of data, and the promise of "the cloud" allows access to data anywhere, anytime. Recently, IBM researchers combined the benefits of magnetic hard
drives and solid-state memory to overcome challenges of growing memory demand
and shrinking devices. Called Racetrack memory, this breakthrough could lead to
a new type of data-centric computing that allows massive amounts of stored
information to be accessed in less than a billionth of a second. This storage research challenges previous theoretical
limits to data storage—ensuring our digital universe will always be preserved.
2) Data curation will provide structure in the midst of the data deluge
Now that we have the capability to preserve our digital
universe, we need to find a way to make it useful. We need to take the next
step past data preservation to data curation. Data curation is the active and ongoing management of data
through its lifecycle. This smarter data categorization adds value to data that
will help glean new opportunities, improve the sharing of information and
preserve data for later re-use. Social media is a great example of the power of curated
data. Sites like Facebook, Google+, Pinterest, etc. compile our digital lives and give their users a platform to organize their content. However, there's also a lot of work involved in selecting,
appraising and organizing data to make them accessible and interpretable. The
key is bringing data sets together, organizing them and linking them to related
documents and tools. If data can be stored in a way that provides context,
organizations can find new and useful ways to use that data.
3) Storage analytics will open new business insights
With data curation providing organizations a platform to better utilize their data, analytics will help turn that data into intelligence and, ultimately, knowledge. With the information that historical trending analytics and infrastructure analytics provide, you can index and search in a more
intelligent way than ever before. By doing analytics on stored data, in backup
and archive, you can draw business insight from that data, no matter where it
exists. The application of IBM Watson technology for healthcare
provides a good example. Watson collects data from many sources and is able to
analyze the meaning and context. By processing vast amounts of information and
using analytics, it can suggest options targeted to a patient's circumstances and assist decision makers, such as physicians and nurses, in identifying the most likely diagnosis and treatment options for their patients. Through intelligent storage and data retrieval systems, we
can learn more with the information we have today to improve service to
customers or open new revenue streams by leveraging data in new ways.
4) Storage becomes a celebrity – new business needs are pushing storage into the spotlight
As our digital and data-driven universe expands, certain
industries are able to reach new levels of innovation by having the capacity to
house, organize and instantaneously access information. For example, Hollywood is known for its big budget
blockbusters, but it’s the big storage demands of new formats such as digital, CGI, 3D and high definition that are impacting not just the bottom line, but studios’ ability to produce these types of movies. Data sets for movies have grown to the petabyte level. Filmmakers are beginning to trade in film reels for SSDs
as just one day’s worth of filming can generate hundreds of terabytes of data.
The popularity of these high data-generating formats means studios are looking
for new storage technologies that can handle the demand. The healthcare industry may be facing an even bigger data dilemma than the entertainment business. Take a look at the University of Leipzig in Germany, which is running a major genetic study called LIFE to examine disease in populations. LIFE is cataloging genetic profiles of
several thousand patients to pinpoint gene mutations and specific proteins.
This process alone generates multiple terabytes of data. Even one 300-bed hospital may generate 30 terabytes of
data per year. Those figures will only grow with higher-resolution medical imaging and new tools and services such as electronic health records.
5) Intervention...The Data
In this era of Big Data, more is always better, right? Not
so – especially when every byte of data costs money to store and protect. Businesses are turning into data hoarders and spending too
much time and money collecting useless or bad data, potentially leading to
misguided business decisions. This practice can be changed with simple policy decisions and by implementing capabilities that already exist in smarter storage, but companies are hesitant to delete any data (and many times
duplicate data) due to the fear of needing specific data down the line for
business analytics or compliance purposes. Part of the solution starts with eliminating the copies.
Nearly 75% of the data that exists today is a copy (IDC). By deleting and
disabling redundant information, organizations are investing in data quality
and availability for content that matters to the business. Consider the effect
of unneeded data, costing money by replicating throughout an organization’s information
systems. This outdated data can also potentially be accessed for fraud.
Ensuring the quality of data is not costly; not getting it right is.
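To see what that 75% copy figure means in practice, here is a back-of-the-envelope sketch. The reclamation rate is my own illustrative assumption (suppose half of the redundant copies can be safely removed), not an IDC number:

```python
def reclaimable_capacity_tb(total_tb, copy_fraction=0.75, reclaim_rate=0.5):
    """Estimate how much capacity could be freed by removing redundant copies.

    copy_fraction: share of stored data that is a duplicate (IDC's ~75% figure).
    reclaim_rate:  assumed portion of those copies that can safely be deleted
                   or deduplicated (illustrative, not a benchmark).
    """
    return total_tb * copy_fraction * reclaim_rate

# A hypothetical 400 TB estate: 400 * 0.75 * 0.5 = 150 TB reclaimable.
print(reclaimable_capacity_tb(400))  # -> 150.0
```

Even with conservative assumptions, the reclaimable capacity is a large fraction of the estate, which is why copy elimination is such a natural starting point.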
ARE YOU SPEAKING AT PULSE? IF SO, READ ON PLEASE...and book your room at the MGM Grand today to avoid a price increase!
1. Have you uploaded your presentation? The deadline to upload presentations was January 20th to enable appropriate reviews and posting to the Pulse 2012 SmartSite Agenda Builder. Your presentation will be converted to PDF and can be downloaded or printed in advance by attendees, pending your approval. For a full list of presentation guidelines and processes please review the Presentation tab on the online Speaker Kit.
2. Do you know what audio visual equipment will be available in your session room? Click the A/V tab in your online Speaker Kit to review this important information.
3. Are you connected? Follow the conference news & highlights on Twitter or the Pulse blog. Click the Speaker Kit tab to find links and hashtags for use with social media. Find Pulse attendees using the Pulse SmartSite agenda builder.
4. Attendees are always interested in getting to know their speaker! Do you have a bio? Review and update your brief bio by logging onto the Speaker Kit website.
5. Have you started to build your Pulse conference agenda on SmartSite, the attendee conference portal? You will need your conference registration confirmation number to login to this site. Click the Build My Agenda icon to view scheduled sessions.
6. Have you registered for the conference and booked your hotel? Review the registration instructions listed in the registration tab on the speaker kit website.
Very important... Conference hotel accommodations are limited and available on a first-come, first-served basis. Conference rates are valid until January 27, 2012 or until the room block is sold out, whichever comes first.
Please take a few minutes to review the information in your online Speaker Kit, and follow-up on all speaker actions as needed.
If you have any questions or need additional information, please contact speaker support at PulseSpeaker@experient-inc.com. We look forward to seeing you at the MGM Grand in Las Vegas March 4-7!
IBM has detailed innovative projects and research that show new
storage approaches to support Big Data growth and drive business innovation.
Healthcare, financial services, media and entertainment, and
scientific research among many industries face the challenge of storing and
managing the proliferation of data to extract critical business value. As
storage needs rise dramatically, storage budgets lag, requiring new innovation
and approaches around storing, managing, and protecting Big Data, cloud data,
virtualized data and more.
Watson-inspired Storage Takes on the Cosmos: IBM is working on a project with the Institute
for Computational Cosmology (ICC) at Durham University in the U.K. and Business
Partner OCF to build a storage system to better store and manipulate Big Data
for its cosmology research on galaxies. ICC is adopting the same IBM General
Parallel File System technology used in the
IBM Watson system to store and manage more than one petabyte of data from two
significant projects on galaxy formation and the fate of gas outside of
galaxies. The enhanced storage system will enable up to 50 researchers, working
collaboratively to access and review data simultaneously. It will also help ICC
learn to manage data better, storing only essential data and storing it in the most cost-effective way.
New Storage Platform Delivers More Personalized, Visual
Healthcare: A medical archiving
solution from IBM Business Partners Avnet Technology Solutions and TeraMedica,
Inc. powered by IBM systems, storage and software gives patients and caregivers
instant access to critical medical data at the point-of-care. Developed in
collaboration with IBM, the medical information management offering can manage
up to 10 million medical images, helping health care practitioners provide
better patient care with greater efficiency and at reduced costs. The
integrated platform allows users to manage and view clinical images originating
from different treatments and providers to bring secure, consistent image
management and distribution at point-of-care.
Virtualization Consolidates Storage Footprint for Medical Center: Kaweah Delta Health Care District (KDHCD), a
general medical and surgical hospital in Visalia, Calif., needed to reduce its
operational costs while increasing storage space. To meet these demands, KDHCD
tapped IBM's storage systems to create a new storage platform that reallocates
resources and saves a significant amount of data space with thin-provisioning
technology. Virtualization creates a smaller hardware footprint so the hospital
also saved on power and cooling costs. KDHCD now has a consolidated storage
environment that provides the scalability, ease-of-management, and security to
support critical healthcare data management for the hospital.
Often data center managers find it difficult to accommodate data growth, while maintaining high levels of storage service and availability. In addition to these challenges, new IT initiatives such as virtualization and cloud services introduce additional complexity to already stressed out administrative staff.
IBM's Integrated Service Management solutions can help organizations realize the full potential of their business by providing a holistic approach to delivering and managing IT services. Specifically, IBM Tivoli Storage Productivity Center is designed to equip today’s IT organizations with critical capabilities for visibility, control and automation in the storage environment.
Survey of IT Decision Makers Sheds Light on Need for a New Class of Storage
Late last year, IBM issued survey results that shed light on the storage spending priorities and organizational needs for the near future. Conducted by Zogby International on behalf of IBM, the survey of 255 IT professionals in decision-making positions showed that the majority of respondents (57 percent) agree their organization needs to develop a new storage approach to manage future growth.
The survey underscores the need for a new class of storage that can expand the market for solid-state drives (SSDs) by combining their ability to speed the delivery of data with lower costs and other benefits. Nearly half (43 percent) of IT decision makers say they have plans to use SSD technology in the future or are already using it. Speeding delivery of data was the motivation behind 75 percent of respondents who plan to use or already use SSD technology. However, the major factor for not using SSD was cost, according to 71 percent of respondents.
To address this issue, IBM Research has been investigating a potential solid-state breakthrough called “Racetrack memory” that could someday access data significantly faster than hard-disk drives—at the same low cost—and be a successor to flash in handheld devices.
The survey also found that:
· Nearly half (43 percent) say they are concerned about managing Big Data.
· Nearly half (48 percent) say they plan on increasing storage investments in the area of virtualization, with cloud (26 percent), flash memory/solid state (24 percent) and analytics (22 percent) also on the list.
· More than a third (38 percent) said their organization’s storage needs are growing primarily to drive business value from data. Adhering to government compliance and regulations that require organizations to store more data for longer -- sometimes up to a decade -- was also a leading factor (29 percent).
· About a third of all respondents (32 percent) say they either plan to switch to more cloud storage in the future or currently use cloud storage.
Organizations are faced with an increasing challenge of storing, analyzing, and protecting ever-expanding data sets that hold significant business value, driving the need for radical new approaches to storage fueled by innovation. Cloud computing, analytics and more advanced storage management technologies will be critical to tapping into that data and turning it into intelligence.
Focused on developing disruptive innovation and pushing the boundaries of data exploration and utilization, IBM Research drives new approaches to managing data, including storage for cloud systems that are geographically dispersed, adding autonomic behavior to storage systems, creating archival systems that prevent a “digital dark age,” and optimizing storage for analytics.
In the last year, IBM Research has recorded a number of storage technology breakthroughs including a 29-gigabit per-square-inch tape demonstration; a world record of scanning 10 billion files in 43 minutes; and, more recently, the creation of a 120-petabyte data system that is roughly 30 times larger than the biggest single data repository on record.
Want to learn more about how HyperIP can help accelerate your data
transfer by as much as 12x? Join NetEx and IBM Tivoli Storage
Software for a webinar on Jan. 25, 1PM EST to hear all about how
pairing Tivoli Storage Manager 6.3 with NetEx HyperIP can help you
achieve this! Register here: http://bit.ly/xQFHdm
Previous blog posts talk about the performance improvement of TSM replication over HyperIP (http://www.netex.com/blog/?p=206). The following chart describes the true performance of replication over HyperIP (data provided by NetEx):
HyperIP enables TSM replication to see near wire speed, over any distance, even over lossy WANs. With HyperIP’s block-level compression, throughputs can literally exceed wire speed by as much as 6x; with lossy WANs, over 12x. This means a replication window that moves gigabytes of data can be reduced from hours to minutes, without having to increase the bandwidth of the WAN links between remote TSM server nodes. Bandwidth savings alone can return the HyperIP investment in less than 3 months.
For more information, visit http://www.hyperip.com or contact your IBM Business Partner about Tivoli Storage Manager replication over HyperIP. Stay tuned for upcoming co-sponsored webinars with the IBM Tivoli team and NetEx. NetEx is a proud exhibitor at Pulse 2012.
Author: Steve Thompson, NetEx (email@example.com)
In October 2011, IBM added native replication of backup data in Tivoli Storage Manager Extended Edition v6.3 to help customers add "warm standby" disaster recovery capabilities to their unified recovery management platform. This is a powerful new feature that can help reduce the costs of maintaining a separate DR point solution, and simplifies the overall management of the environment.
However, when moving data between physical locations, especially over the long distances desired for a true disaster recovery solution, network latency can become a significant issue. TSM replication is extremely efficient, in that it sends only incremental, deduplicated data between sites. But transfer times can still be impacted by network latency over long distances.
To overcome this problem and provide near native transmission speeds, WAN acceleration solutions such as Netex HyperIP can be deployed.
Netex recently completed testing of their solution with the new TSM replication feature and found that it can accelerate data transfer by as much as 6 times, or 12 times with HyperIP’s block-level compression. To learn more, please visit http://www.netex.com/blog/?p=206
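Why does latency matter so much, even on a fat pipe? For a single TCP stream, sustained throughput is roughly bounded by the window size divided by the round-trip time. Here is a quick back-of-the-envelope sketch (my own illustration of the general principle, not NetEx's test methodology):

```python
def tcp_throughput_mbps(window_kb, rtt_ms):
    """Upper bound on single-stream TCP throughput: window size / round-trip time."""
    window_bits = window_kb * 1024 * 8
    rtt_s = rtt_ms / 1000.0
    return window_bits / rtt_s / 1_000_000  # convert bits/s to Mbps

# A default 64 KB window over a hypothetical 80 ms cross-country link:
# roughly 6.6 Mbps -- far below, say, a 100 Mbps circuit you're paying for.
print(round(tcp_throughput_mbps(64, 80), 1))
```

This is why WAN acceleration appliances that mitigate latency and loss can deliver such dramatic speedups without any change to the underlying bandwidth.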
I know. I’m sinking pretty low when I borrow a line from an animated gecko. But as I keep thinking that data backup and restore systems are very much like automobile insurance, I just can’t resist.
Think about it – what value do you get from paying for auto insurance, other than the peace of mind that should some fool run into you, you’ll be able to get back on the road in a reasonable amount of time and at a reasonable expense? The same is true with data backup: on its own, it offers little value while costing a lot of time and money, but you had better have one when something / anything goes wrong.
As with your auto insurance, you want to pay as little for backup/restore as possible while meeting your service level objectives. There are choices to be made that impact your costs and your recovery capabilities – does your policy include towing, collision repair, or the use of a rental car while yours is in the shop? And what is the out-of-pocket deductible you have to pay per accident?
Same thing with backup – which data do you protect, how often do you perform backup, how many versions and copies do you keep, how long do you keep them, where do you distribute them, how fast do you need to restore? All of these service level considerations can impact your costs.
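Those choices translate directly into capacity and cost. Here is a rough sketch of the arithmetic, with every number hypothetical (a simplified incremental-style model, not an IBM sizing tool):

```python
def backup_capacity_tb(primary_tb, daily_change_rate, versions_kept, copies=2):
    """Rough backup storage estimate for an incremental-style scheme.

    One full baseline plus one incremental per retained version,
    multiplied by the number of on-site/off-site copies kept.
    All parameters are illustrative service-level choices.
    """
    baseline = primary_tb
    incrementals = primary_tb * daily_change_rate * versions_kept
    return (baseline + incrementals) * copies

# 50 TB primary, 2% daily change, 30 versions kept, 2 copies:
# (50 + 50*0.02*30) * 2 = (50 + 30) * 2 = 160 TB
print(backup_capacity_tb(50, 0.02, 30))  # -> 160.0
```

Double the retained versions or add a third copy and the bill grows accordingly, which is exactly why these service-level questions deserve a deliberate answer.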
At IBM, we recognize that on the one hand, your business requires the most advanced, reliable and scalable data protection solutions for your applications and data; and on the other hand, the investments in these solutions are nothing more than insurance – they don’t contribute to the top line, and they only contribute to the bottom line when they are called upon to recover operations following a data loss disaster.
We are helping our customers meet these conflicting challenges through an evolution of continuous improvements to our data protection and recovery software, led by Tivoli Storage Manager, that can dramatically improve your business continuity service levels while reducing your costs even more dramatically.
IBM is looking for customers and business partners who are interested in participating in an Early Access Program (EAP)/Beta Program for an upcoming release of FlashCopy Manager, Data Protection for SQL, and Data Protection for Exchange. If you would like to nominate your organization to participate in this EAP/Beta, please send an email to:
Mary Anne Filosa (firstname.lastname@example.org)
and be sure to include your organization's name. Once your email is received you will be sent instructions on signing off on the EAP/Beta legal form online and when that signoff has been completed, you will be sent a link to the program's nomination site. We encourage you to respond quickly if you are interested as the program begins in mid December.
Live Webcast: Using Tivoli Storage Productivity Center to be the "eyes" into your SAN environment, and to see how that environment is changing. LIVE!
In the ever-changing SAN environment, Tivoli Storage Productivity Center has many components to help the Storage Administrator know when and where to focus their attention. We will walk through many of these in a live demo and see how they can be used.
Let TPC help you keep up with storage growth instead of working longer hours!
Scott McPeek, IBM Program Director, Storage Sales Enablement. He has worked in the software industry for more than 30 years; the last ten have been with IBM as part of the TrelliSoft SRM acquisition. Scott now focuses on storage resource management, storage performance management and virtualization with products like TPC, SVC and the Storwize V7000.
How are you spending your time this weekend? Polishing up your Pulse 2012 storage session abstract, hopefully! With only 4 days left to submit a 100-word abstract by Nov. 7, we thought it would be helpful to share some final pointers. Keep in mind that this year's theme
is Business Without Limits, and we are seeking to understand how you gained visibility, control and automation to deliver better business results.
What are the key benefits to you as a Speaker? One full Pulse conference pass ($1995 value) and the opportunity to gain visibility for your company, and take advantage of an incredible networking opportunity with over 7,000 industry experts, press, and analysts.
Here are some pointers on how to get your Storage Management session abstract accepted:
1. Focus it on topics such as how you used Tivoli Storage Manager to manage "big data"; success with recent upgrades; or cloud storage
2. Tell us about the key business challenges you were trying to solve, and how IBM Tivoli storage solutions helped you address these challenges
3. What was the ROI, or key results, from implementing a Tivoli storage solution, and what valuable lessons did you learn from the experience
If you do not plan to speak at Pulse (speakers attend the conference complimentary), don't forget to register during Early Bird registration by December 16. Early Bird registration can save you up to $700 off registering onsite! See you at Pulse 2012!
Well it's that time again, hard to believe, I know...PULSE call for papers has opened, and we want to have another banner year in the Tivoli Storage Sessions! Last year we were standing room only in many of our sessions and this year we hope to fill each room once again.
As for topic suggestions, we'd like to hear from customers who:
Use TSM to manage 'big data'
Have best practices, created with our Tivoli Storage portfolio, that they want to share
In my earlier post – Eliminate management inefficiencies and complexities associated with your cloud foray – I briefly touched upon ‘storage tiering reports’. These reports are now available as part of this week’s Tivoli Storage Productivity Center v4.2.2 announcement. One of the latest Storage Wave studies by The InfoPro points to ‘Tiered Storage Build Out’ as one of the top three initiatives among storage managers. Yet in a complex, virtualized environment, having complete visibility and control over storage tiering can be challenging.
Tivoli Storage Productivity Center provides capabilities for reporting on storage tiering activity to support data placement and to optimize resource utilization in a virtualized environment. The storage tiering reports leverage the estimated capability and actual performance data for IBM SAN Volume Controller and IBM Storwize V7000, and offer storage administrators key insights such as:
• Are the backend subsystems optimally utilized?
• Does moving a certain workload to low-cost storage impact service levels?
• How can performance be leveled out in a certain pool?
• Which data groups can be moved to an alternate tier of storage?
Image: Sample tiering distribution report
By having a comprehensive view of performance stress on the hardware, storage tiering reports enable administrators to make proactive decisions about volume placement, thus averting any downtime or impact on the data availability.
Tivoli Storage Productivity Center enables storage administrators to optimize disk configurations, such as by progressively and dynamically changing storage tier percentage distributions between high-end, mid-range and low-end storage. For example, an initial 70/30/0 split can be changed to a new distribution of 30/50/20, enabling the organization to realize the corresponding storage infrastructure savings.
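The savings from a redistribution like that are easy to estimate once you put a per-terabyte cost on each tier. A quick sketch with purely illustrative prices (my own assumptions, not IBM list prices):

```python
def tier_cost(capacity_tb, split, cost_per_tb):
    """Annual cost of capacity spread across storage tiers.

    split:       fractions (high, mid, low) summing to 1.0
    cost_per_tb: illustrative $/TB for each tier
    """
    return sum(capacity_tb * frac * cost
               for frac, cost in zip(split, cost_per_tb))

prices = (3000, 1500, 500)  # hypothetical $/TB for high/mid/low-end tiers

# The 70/30/0 -> 30/50/20 redistribution from the text, on 100 TB:
before = tier_cost(100, (0.70, 0.30, 0.00), prices)
after = tier_cost(100, (0.30, 0.50, 0.20), prices)
print(before, after, before - after)  # an $80,000 annual difference
```

Even with made-up prices, shifting less-active data off the high-end tier clearly dominates the cost equation, which is the whole point of tiering reports.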
To read more about Tivoli Storage Productivity Center, click here.
What’s new in Tivoli Storage Productivity Center?
IBM announces Tivoli Storage Productivity Center Select - a comprehensive storage management software that offers advanced provisioning, performance management, capacity optimization and reporting capabilities. Select includes all key capabilities of Basic Edition, Disk and Data modules of Tivoli Storage Productivity Center family, and is conveniently packaged for ‘per enclosure’ licensing.
Select complements Tivoli Storage Productivity Center for Disk Select (formerly Disk Midrange Edition) and is ideal for management of IBM XIV, Storwize V7000, DS3000, DS4000, DS5000 as stand-alone devices or when attached to a SAN Volume Controller. Select also supports any device that is attached to Storwize V7000.
TSM FastBack for Workstations is a centrally-managed solution that reduces the risks of losing important information stored on thousands of personal computers across an entire enterprise, as described here:
IBM will be running a beta program for the next release of this product, providing those taking part with early access to the latest planned enhancements. If you would like to participate, please contact the beta coordinator, Matthew Boult (email@example.com).
NEW!! Technical Services Webinar: Capacity Planning in a Tivoli Storage Manager Environment
As much as customers would like to "backup everything and keep it forever", storage is not unlimited. The reality of ever increasing data growth, combined with regulatory compliance and the associated risks make the arduous task of capacity planning for backup ever more critical. A new Reporting and Monitoring tool is available with Tivoli Storage Manager (TSM). This new tool, based on IBM Tivoli Monitoring, can collect and report on historical data and is an integral part of a capacity planning regimen.
This session will demonstrate a capacity planning methodology that conforms to the ITIL Capacity Planning process description by showing how the TSM Reporting and Monitoring tool and other TSM components can be utilized to ease the pain of capacity planning. Additionally, this session will look at strategies, like data deduplication, to reduce the amount of backup data while maintaining regulatory compliance.
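As a taste of the methodology, a capacity-planning projection boils down to compounding a growth rate against a deduplication ratio. The sketch below uses assumed figures (30% annual growth, 4:1 dedup); it is my own illustration, not output from the TSM Reporting and Monitoring tool:

```python
def project_capacity(current_tb, annual_growth, years, dedup_ratio=1.0):
    """Project year-by-year physical backup capacity needs.

    annual_growth: e.g. 0.30 for 30% data growth per year (assumed)
    dedup_ratio:   logical-to-physical reduction, e.g. 4.0 for 4:1 (assumed)
    """
    projection = []
    tb = current_tb
    for year in range(1, years + 1):
        tb *= 1 + annual_growth  # compound the growth
        projection.append((year, round(tb / dedup_ratio, 1)))
    return projection

# 100 TB today, 30% yearly growth, 4:1 dedup, 3-year planning horizon:
for year, physical_tb in project_capacity(100, 0.30, 3, dedup_ratio=4.0):
    print(f"year {year}: {physical_tb} TB physical")
```

The historical data the reporting tool collects is what lets you replace these assumed rates with measured ones, which is where the real planning value comes from.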
Presenters: Mark Vanderboll, IBM Tivoli Global Response Team; Dave Daun, IBM Advanced Technical Skills
Did you see that we announced a new version of Tivoli Storage FlashCopy Manager!
Here are the highlights of IBM® Tivoli® Storage FlashCopy® Manager V3.1:
Advanced data protection and recovery features for VMware vSphere environments
Enhanced data protection capabilities for Microsoft® Windows®, including support for New Technology File System (NTFS) and custom applications, and enhanced user interfaces for Microsoft Exchange and Microsoft SQL Server
Support for IBM DB2® and Oracle databases (with or without SAP environments) on IBM AIX®, Solaris SPARC, Linux® x64, and HP-UX IA64 platforms
Support for custom business-critical applications on IBM AIX, Solaris SPARC, Linux x64, HP-UX IA64, and Microsoft Windows platforms
Transparent integration with IBM storage systems such as IBM System Storage® SAN Volume Controller space efficient FlashCopy target volumes, IBM Storwize® V7000, IBM XIV® Storage System, and IBM System Storage DS8000™
Can leverage the Microsoft Volume Shadow Copy Service (VSS) framework for integration with non-IBM hardware subsystems
Database cloning support for UNIX® and Linux clients
It will be generally available on October 21, 2011.
You should expect more from your storage, and from your storage vendor. On October 11 and 12, IBM is announcing a broad range of new and enhanced storage products that help to meet this challenge.
Included are significant updates to the Tivoli Storage Manager (TSM) family. TSM is already the data protection software leader in scalability, functionality, data reduction, performance and reliability. The v6.3 release will keep us ahead of the competition, and will help to keep you ahead of the challenges you’re facing.
Struggling with data growth? No problem.
The scalability of TSM is being doubled for the third year in a row, now supporting up to 4 billion data objects in a single TSM Server. In 2008, the internal database limit was 500 million files, so we’ve made an 8X improvement since then. That means fewer backup servers are needed. And remember that TSM is a single-server architecture; we don’t add “media servers” to provide scale.
Unified Recovery Management now includes Replication for faster Disaster Recovery
We’ve been working to simplify the data protection and recovery infrastructure by unifying the management of all the different tools you need for different applications, operating systems, data locations, and data loss causes. In the release of Tivoli Storage Manager Extended Edition v6.3, we’re adding client data and metadata replication to an off-site TSM Server. This provides a “hot standby” disaster recovery capability, managed from within the TSM Admin Center. The replication is asynchronous and can be scheduled on a per client basis to minimize the impact on network bandwidth. And it can be configured in a classic source-to-target model as well as between two active sources, many-to-one, or in a “round robin” architecture.
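The replication topologies named above can be sketched abstractly. The function below is a toy model showing which source-to-target pairs each topology implies; the function, server names, and structure are all invented for illustration and are not TSM configuration syntax:

```python
# Illustrative sketch of the replication topologies described above.
# Everything here is a hypothetical model, not TSM commands or syntax.

def replication_pairs(servers, topology):
    """Return (source, target) pairs implied by a replication topology."""
    if topology == "source-to-target":
        # one active source replicating to one hot-standby target
        return [(servers[0], servers[1])]
    if topology == "many-to-one":
        # every server replicates to a single shared DR target
        *sources, target = servers
        return [(s, target) for s in sources]
    if topology == "round-robin":
        # each server replicates to the next; the last wraps to the first
        return [(servers[i], servers[(i + 1) % len(servers)])
                for i in range(len(servers))]
    raise ValueError(f"unknown topology: {topology}")

print(replication_pairs(["prod", "dr"], "source-to-target"))
print(replication_pairs(["east", "west", "central"], "round-robin"))
```

The point of the model is simply that the pairing logic, not the data path, is what distinguishes the topologies: round-robin gives every site a standby without dedicating one server purely to disaster recovery.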
Simplifying the administrator’s life
One of the painful tasks an administrator has to do, especially in large environments, is patching the backup/archive client software on protected systems. With this release, we’re adding the ability to automatically push out client software updates across AIX, HP-UX, Linux and Windows systems (Windows push support was actually introduced last year). This new capability should reduce the time needed to perform an update by at least 80%.
Improved integration with VMware
Tivoli Storage Manager for Virtual Environments v6.3, our non-disruptive off-host solution for VMware virtualized servers, now supports VMware vSphere 5 and includes a plug-in for vCenter to easily manage TSM backup and restore operations from within the VMware environment. Tivoli Storage FlashCopy Manager v3.1 is also being released with VMware vStorage APIs for Data Protection integration and the vCenter plug-in to provide hardware-assisted application-aware snapshot management. Support for DB2, Oracle and SAP databases on HP-UX is also added in the new FlashCopy Manager release.
Something BIG for mainframe customers
Tivoli Storage Manager for z/OS Media v6.3 is a new connector that enables customers to leverage their mainframe-attached FICON storage devices for storing TSM data. This offering won’t increase the licensing costs for existing customers that move their TSM v5.x Server software from z/OS and install TSM v6.3 Server on an AIX system or a Linux on z partition, and gives them access to all of the cost-saving improvements made in TSM over the past 3 years.
The new standard in licensing simplicity
In June we announced the availability of the Tivoli Storage Manager Suite for Unified Recovery. This bundle of ten TSM and FastBack products is simply licensed by the amount of data being managed within the TSM environment, first copy only. We have seen outstanding results from this new offering, from both new and existing customers. The reason is simple: you want to use the right tool for each data protection job, but you don’t want to have to worry about acquiring and managing individual product licenses for each one. This is especially true in virtual server and cloud environments. Added benefit: our broad range of built-in data reduction technologies can dramatically reduce the costs of this offering.
Tivoli Storage Manager Suite for Unified Recovery v6.3 is also being announced, and includes all of the TSM and TSM for Virtual Environments enhancements noted above.
Many other improvements are being introduced across the family, including better reporting and monitoring; better scalability for Microsoft Windows, Exchange and SQL Server; and faster internal processes such as database backup. SAP customers using TSM for Enterprise Resource Planning v6.3 can now do incremental backups with Oracle RMAN.
For more information on the Tivoli Storage Manager enhancements, please refer to the announcement letter on ibm.com (link)
To learn more about all the new IBM Storage announcements, please click here (live 12 Oct.)
Ron’s recent post on choosing the right storage hypervisor points to ‘comprehensive performance monitoring’ as one of the key capabilities you need to successfully deploy cloud storage. This reinforces the need for sophisticated tools that can significantly reduce the burden of storage configuration (think best practices) and performance monitoring.
Bottleneck analysis
When system response is poor, it’s no longer the network administrator who gets the call – it’s the storage administrator. Especially in a virtualized environment, it is essential to have performance monitoring tools that provide a quick yet comprehensive view of the data path to ascertain any bottlenecks. With Tivoli Storage Productivity Center, you will be able to see where bottlenecks have occurred – for example, one storage subsystem may be overutilized while another is underutilized.
Data Path Explorer offers a detailed view of all the storage entities and their connectivity. It provides performance information across the entire data path – from host to array – and allows you to drill down and view performance metrics at the port level. Standard Edition, the advanced module within Tivoli Storage Productivity Center, offers advanced reporting capabilities for bottleneck analysis.
According to a storage manager at a leading medical university, “With Tivoli Storage Productivity Center, I can quickly determine if there exists a bottleneck in the SAN infrastructure. Earlier it could take me days or sometimes weeks to figure that out. Now, I can do it in minutes”.
If you have recently deployed Tivoli Storage Productivity Center, make use of IBM’s Value Pack service offerings, which provide analysis of disk subsystem performance bottlenecks using native product tools. Talk to your IBM sales representative or IBM business partner for more information.
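The over/under-utilization comparison described above can be sketched in a few lines. The thresholds and sample subsystem figures below are assumptions for illustration, not Tivoli Storage Productivity Center defaults or output:

```python
# Toy illustration of spotting utilization imbalance across storage
# subsystems, in the spirit of the bottleneck reports described above.
# Thresholds and sample data are invented, not product defaults.

def find_imbalance(utilization, high=0.85, low=0.30):
    """Classify subsystems as overutilized or underutilized."""
    over = {name: u for name, u in utilization.items() if u >= high}
    under = {name: u for name, u in utilization.items() if u <= low}
    return over, under

sample = {"DS8000-A": 0.92, "Storwize-B": 0.25, "XIV-C": 0.60}
over, under = find_imbalance(sample)
print("Overutilized:", over)    # candidates for offloading workload
print("Underutilized:", under)  # candidates to receive migrated volumes
```

Pairing the two lists is exactly the insight the blog describes: an overutilized subsystem next to an underutilized one is a migration opportunity, not a hardware purchase.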
Configure your SAN the best practices way
Administrators are expected to ensure high availability for SANs. SAN configuration has traditionally been done manually. But as the complexity in managing the storage network grows, you need sophisticated tools to control and even optimize storage configurations. And adherence to best practices is essential for successful configuration and deployment of complex systems in your storage environment.
I touched upon the SAN Planner topic briefly in my previous post – and would like to delve a little deeper in this one. As mentioned earlier, SAN Planner is a wizard-based tool that assists storage administrators in end-to-end planning involving all storage components and related networks. SAN Planner helps implement best practices pertaining to replication relationships; it utilizes current and historical performance metrics to recommend the best configuration when commissioning new storage systems.
There are three planners associated with recommending storage configuration changes, which are based on current workloads, capacity utilization and best practices:
Volume Planner – helps administrators provision storage based on capacity, compression, RAID levels, etc. It includes replication planning as well, supporting sessions such as Metro Mirror, Global Mirror and FlashCopy.
Zone Planner – provides zoning and LUN masking configuration support.
Path Planner – assists in planning and implementing storage provisioning for hosts and storage systems with multipath support in fabrics.
All three planners can be invoked separately or together in an integrated manner from the Tivoli Storage Productivity Center console. Learn more about these planners and their capabilities: download the latest Redbooks.
As you can see, configuring a SAN with Tivoli Storage Productivity Center is child’s play. But can you check whether your current SAN configuration conforms to best practices? Yes you can!
SAN Configuration Analyzer provides an end-to-end check against configuration policies, ensuring the correctness of storage network configurations such as zoning, multipathing and replication. In addition, the tool alerts administrators when best practices are violated.
Storage networks undergo frequent changes to accommodate evolving business policies and ever-growing data. Administrators are challenged to track configuration changes for problem determination, change management or auditing purposes. Tivoli Storage Productivity Center offers the SAN Configuration History Viewer to provide a historical view of changes and eliminate the time gap in pinpointing problem areas associated with configuration changes.
To learn more about the IBM Tivoli Storage Productivity Center Select Series, contact your IBM sales representative or IBM Business Partner, or visit ibm.com.
Click here to join the virtual dialogue on Storage Hypervisor; share your thoughts and concerns in our group chat on October 7, 2011 from 12 noon to 1pm Eastern Time. You can log in now for a preview of topics.
This is part 3 of a 3 part post on how somebody responsible for a private storage environment could save their company a pile of money by implementing cloud storage techniques. Part I introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. Part II explained how a storage service catalog, self-service provisioning, and usage-based chargeback can be used to drive down cost. In this 3rd post, I’m going to share some thoughts that should help you be smarter about choosing a storage hypervisor.
The first step is to remind ourselves what we’re trying to accomplish with a storage hypervisor. From our experience deploying over 7000 storage hypervisors, the starting point for most folks is improved efficiency and data mobility. Remember, the basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, people are looking for the freedom to move a slice (or virtual volume) from tier to tier, from vendor to vendor, and more recently, from site to site all while the applications are online and accessing the data.
To pull off this level of mobility – in servers or storage – it’s important that the hypervisor not be dependent on the underlying physical hardware for anything except capacity (compute capacity in the case of a server hypervisor like VMware, storage capacity in the case of a storage hypervisor). Think about it… Wouldn’t it be odd to have a pair of VMware ESX hosts in a cluster, one running on IBM hardware and one on HP hardware, and be told that you couldn’t vMotion a virtual machine between the two because some feature of your virtual machine would just stop working? If you tie a virtual machine to a specific piece of hardware in order to take advantage of the function in that hardware, it sort of defeats the whole point of mobility. The same thing applies to storage hypervisors. Virtual volumes that are dependent on a particular physical disk array for some function, say mirroring or snapshotting for example, aren’t really mobile from tier to tier or vendor to vendor any more.
But it’s more than just a philosophical issue; there’s real money at stake (you may want to read what comes next a couple of times). In Part II of this post I discussed using a storage service catalog as a means of defining specific service level needs for your different categories of data. These service levels covered the gamut from capacity efficiency and I/O performance (for you techies that’s RAID levels, thin provisioning, use of solid state disks, etc), to data access resilience and disaster protection (multi-pathing, snapshotting, mirroring…). The reason so many datacenters have an overabundance of tier-1 disk arrays on the floor is because, historically, if you wanted to take advantage of things like thin provisioning, application-integrated snapshot, robust mirroring for disaster recovery, high performance for database workloads, access to solid-state disk, etc… you had to buy tier-1 ‘array capacity’ to get access to these tier-1 ‘storage services’ (did you catch the subtle difference?). Now, I don’t have anything against tier-1 disk arrays (my company sells a really good one). In fact, they have a great reputation for availability (a lot of the bulk in these units is sophisticated, redundant electronics that keeps the thing available all the time). But with a good storage hypervisor, tier-1 ‘storage services’ are no longer tied to tier-1 ‘array capacity’ because the service levels are provided by the hypervisor. Capacity…is capacity…and you can choose any kind you want. Many clients we work with are discovering the huge cost savings that can be realized by continuing to deliver tier-1 service (from the hypervisor), only doing it on lower-tier disk arrays. As I noted in Part II of this post, we’ve seen clients shift their mix of ‘array capacity’ from 70% tier-1 to 70% lower-tier arrays while continuing to deliver tier-1 ‘storage services’ to their data. This YouTube video describes an example of that at Sprint.
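To make the savings concrete, here is a back-of-the-envelope calculation of the 70/30 tier-mix shift mentioned above. The per-TB prices are invented purely for illustration:

```python
# Back-of-the-envelope illustration of the tier-mix shift described above.
# Per-TB prices are hypothetical, not real list prices.

def capacity_cost(total_tb, tier1_fraction, tier1_price, lower_price):
    """Total cost of a capacity pool split between tier-1 and lower tiers."""
    t1 = total_tb * tier1_fraction
    lower = total_tb - t1
    return t1 * tier1_price + lower * lower_price

before = capacity_cost(1000, 0.70, 5000, 1500)  # 70% tier-1 capacity
after = capacity_cost(1000, 0.30, 5000, 1500)   # 70% lower-tier capacity
print(f"before: ${before:,.0f}  after: ${after:,.0f}  "
      f"savings: {100 * (before - after) / before:.0f}%")
```

With these made-up prices the shift saves roughly a third of the capacity bill, which is why the separation of ‘storage services’ from ‘array capacity’ matters financially and not just architecturally.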
Smart idea #1: Be careful to understand what, if any, dependency a storage hypervisor has on the capability of an underlying disk array to deliver function to your virtual volumes (like thin provisioning, compression, snapshotting, mirroring, etc.)
Next thought. There are three rather interrelated solution categories in the area of dealing with outages and protecting data.
Disaster avoidance (“I know the hurricane is coming, let’s move the datacenter further inland”)
Disaster recovery (“oh oh, the hurricane hit, and my datacenter is dead”)
Data protection (“oops, I goofed up my data and I need to recover”)
IT managers we talk to have been successfully dealing with disaster recovery (for the techies, that’s array mirroring along with recovery automation tools like VMware Site Recovery Manager (SRM), IBM PowerHA, or others) and data protection (that’s array snapshotting along with specific connectors for databases, email systems, etc., as well as connectors to enterprise backup managers like Tivoli Storage Manager) for years. This third area of disaster avoidance has really gained steam because storage hypervisors now allow you to access the same data at two locations, giving you the ability to do an inter-site application migration with things like VMware vMotion, PowerVM Live Partition Mobility (LPM), or others. When you are expecting a disaster, disaster avoidance lets you transparently get out of the way. But it doesn’t magically keep all the other unexpected bad things from happening. You’ll still want to be prepared with disaster recovery and data protection. And if you are implementing a storage hypervisor, you shouldn’t be forced to choose.
Smart idea #2: Remembering smart idea #1, be sure that your storage hypervisor has its own ability to provide for disaster avoidance (inter-site mobility), disaster recovery (mirroring that’s integrated with recovery automation tools) and data protection (snapshotting that’s integrated with applications and backup tools).
One final thought. A storage hypervisor isn’t an island unto itself. Like a server hypervisor, it exists in a broader datacenter. Administrators need to be able to see it in the context of the disk arrays it manages, the servers (or virtual machines) that use its virtual volumes, the applications that need backups or clones, the disaster recovery automation that’s coordinating recovery of servers, storage, networks… You get the picture. When the challenges of day-to-day operations happen (and they do happen most every day)…
…the storage network planner needs to look at the logical data path from a virtual machine to its physical server, through the SAN switch, to the storage hypervisor and finally to the physical disk array. He’ll need that storage hypervisor to be integrated with a SAN topology tool.
…an application owner calls up with a performance issue that he’s blaming on ‘the storage’. You’ll need to be able to isolate performance across the whole data path (including the part of the path that goes through the storage hypervisor).
…an application owner wants a consistent snapshot of his application to use as a backup copy (or a test clone). You’ll need a connector that talks to both the application and the storage hypervisor to identify the virtual volumes that need to be snapshotted, prepare the application for the snapshot, and then provide the application owner with an inventory of snapshots he can use to recover from.
…you make the move toward cloud techniques in your private datacenter – implementing a storage service catalog, self-service provisioning, and usage-based chargeback. You’ll need a storage hypervisor that can be auto provisioned and that can provide the metrics on who is using how much storage.
Smart idea #3: Make a list of all the day-to-day operational things you do today with physical storage, and the things you hope to automate in the future, and be careful to understand if your storage hypervisor is sufficiently instrumented and integrated – or if it’s creating a new island to be separately managed.
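The usage-based chargeback idea from the list above can be sketched minimally: meter capacity per tenant, then apply a rate per service tier. The tenant names, tiers and rates below are all made up for illustration:

```python
# Minimal sketch of usage-based chargeback from per-tenant capacity
# metrics; tenants, tiers and rates are hypothetical examples.

def chargeback(usage, rates):
    """Compute a monthly bill per tenant from metered capacity and tier."""
    return {tenant: tb * rates[tier] for tenant, (tb, tier) in usage.items()}

rates = {"gold": 100.0, "bronze": 40.0}   # $ per TB-month, illustrative
usage = {"finance": (12.0, "gold"), "test-lab": (30.0, "bronze")}
print(chargeback(usage, rates))  # {'finance': 1200.0, 'test-lab': 1200.0}
```

The instrumentation question in smart idea #3 is exactly whether your storage hypervisor can feed the `usage` side of a calculation like this automatically, per virtual volume and per consumer.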
And now a word from our sponsors :-) IBM offers the world’s most widely deployed storage hypervisor. With over 7000 deployments, hundreds in the newer inter-site disaster avoidance configuration, we’ve had a lot of opportunity to build some depth. As you evaluate using cloud storage techniques in your private enterprise, you’ll find the things I talked about in this blog series available in IBM products today. They can help you save your company a pile of money (and make you look like a genius while you’re doing it).