Gartner’s Magic Quadrant for SRM and SAN Management software is one of the leading industry publications that provides competitive benchmarking across storage management capabilities and helps support decision making for investments in storage management software. In its latest edition, Gartner positions IBM in the ‘Leader’ quadrant.
IBM Tivoli Storage Productivity Center (TPC) is a clear leader in the SRM market; many enterprises are using TPC today to manage their ever-growing, complex and highly critical storage environments.
TPC is designed to provide comprehensive device management capabilities that include automated system discovery, provisioning, configuration, performance monitoring and replication for storage systems and storage networks. TPC provides storage administrators a simple yet effective way to conduct storage management for multiple storage arrays and SAN fabric components from a single integrated management console.
TPC edges out all other vendors in terms of comprehensively achieving the vision for SRM. TPC provides storage management capabilities that allow administrators to efficiently simplify, centralize, optimize and automate storage management tasks. View the Gartner Magic Quadrant for Storage Resource Management and SAN Management Software, compliments of IBM, here.
If you haven’t unleashed the potential of TPC, watch out for the upcoming version 5.1 release – slated to be announced on June 4, 2012 at IBM Edge2012.
To learn more, please register for IBM's premier storage conference: IBM Edge2012 being held June 4-8 in Orlando, Florida. This is a 4.5 day conference, 100% focused on IBM storage solutions - with many TPC 5.1 and IBM SmartCloud Virtual Storage Center sessions and customer speakers. Tivoli speakers will be featured throughout the conference and more than 30 sessions will be focused exclusively on Tivoli’s entire suite of products, taught by IBM Distinguished Engineers, leading product experts, clients and partners. Special registration discount applies to all Pulse 2012 attendees! Register here.
Note: This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available here http://www.gartner.com/technology/reprints.do?id=1-1A16V0B&ct=120405&st=sb Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
I wanted to share some information about an article that we just published with regards to backing up Exchange Server 2010.
Along with all the other new features of Exchange Server 2010, Microsoft introduced Database Availability Groups (DAGs). DAGs are part of the large focus that Microsoft put on High Availability and Site Resilience within Exchange Server 2010. DAGs allow you to have passive database copies (aka "replicas") that can serve as hot standbys for protection against machine failures, database failures, network failures, viruses, or other issues that may cause an access problem to a database. DAGs are similar in function to Exchange Server 2007 Cluster Continuous Replication (CCR) replicas. However, they extend the capabilities even further. One of the key benefits that customers get when they use DAGs in their enterprise is the ability to completely offload backups from their production Exchange Servers. That means they can run all of their backups from a database copy instead of the production database so as not to impact their production Exchange servers. This enables the production Exchange Servers to spend their resources on doing what they know best, i.e. handling email and facilitating collaboration.
We just published an article (which includes a sample script) to help you automate backing up your Exchange Server 2010 DAG databases. We know that you will find this quite helpful in setting up your backup strategy:
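To give a flavor of what such automation does (this is not the published sample script), here is a minimal Python sketch of the core decision: given the copy status of each DAG database, pick a healthy passive copy so the backup never runs against the active, production server. The server names, database layout, and helper function are invented for illustration.

```python
# Hypothetical sketch: choose a passive DAG copy to back up from, so the
# active (production) copy is never burdened with backup I/O.
# Names and statuses here are illustrative.

def pick_backup_copy(copies):
    """Return the server hosting a healthy passive copy, or None.

    `copies` is a list of (server, status) tuples: "Mounted" marks the
    active copy, "Healthy" marks an up-to-date passive copy.
    """
    for server, status in copies:
        if status == "Healthy":      # passive and in sync -> safe to back up
            return server
    return None                      # no passive copy available; caller decides

# DB1 is active on MBX01 with passive copies on MBX02 and MBX03.
dag_db1 = [("MBX01", "Mounted"), ("MBX02", "Healthy"), ("MBX03", "Healthy")]
print(pick_backup_copy(dag_db1))     # backs up from MBX02, not active MBX01
```

A real script would query the copy statuses from Exchange and then launch the backup against the chosen server; the point is simply that the selection logic keeps backup load off production.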
All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only.
IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver material, code, or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.
Have you played around with IBM Tivoli Storage FlashCopy Manager on Windows yet? If not, maybe it's time to take a look.
When you think of FlashCopy Manager, think of snapshots. FlashCopy Manager provides fast application-aware backups and restores leveraging advanced snapshot technologies. I have been writing software as a developer for IBM Tivoli Storage Manager for almost 20 years now and this technology is one that is changing the industry. Yes, snapshots have been around for a while, but it isn't until the last few years that applications are really starting to embrace them, and in some cases, even require them for their backup needs. There is just too much data to process, too much overhead to back them up, and too little time. People want their applications to serve email and provide access to database tables, not spend their precious cycles on backups. FlashCopy Manager helps address these issues.
FlashCopy Manager follows up on the heels of IBM Tivoli Storage Manager for Copy Services (TSM for CS) which provided snapshot support for Microsoft SQL Server and Microsoft Exchange Server using Microsoft's Volume Shadow Copy Service (VSS). The really cool thing is that you do not need to have a TSM Server in order to use FlashCopy Manager to manage your snapshots. It will work completely stand-alone if you want. But, if you have a TSM Server already, you can use it to extend the power of FlashCopy Manager even more.
What is VSS? VSS is Microsoft's snapshot architecture. It provides the infrastructure for applications, storage vendors, and backup vendors to perform snapshots in a federated and efficient way. Microsoft considers VSS and snapshots important enough to require that any new software release coming out of Redmond be able to be backed up and restored using VSS. If you are running Microsoft Exchange Server or Microsoft SQL Server, you should take a look at snapshots. Microsoft has supported snapshots with Exchange and SQL for years, but Microsoft Exchange Server 2010 kicks it up a notch: Exchange Server 2010 supports backups only through VSS. Yes, you heard it right, Microsoft does not support legacy-style (streaming) backups with Exchange Server 2010. So, if you are planning a move to Exchange Server 2010, it really behooves you to start looking at Microsoft's Volume Shadow Copy Service (VSS), how it works, and the benefits and complexities it brings with it.
Microsoft's Volume Shadow Copy Service (VSS) is complex and involves multiple moving parts. It will pay to invest some time in understanding it. I have put together some links that will help you get started:
You should expect more from your storage, and from your storage vendor. On October 11 and 12, IBM is announcing a broad range of new and enhanced storage products that help to meet this challenge.
Included are significant updates to the Tivoli Storage Manager (TSM) family. TSM is already the data protection software leader in scalability, functionality, data reduction, performance and reliability. The v6.3 release will keep us ahead of the competition, and will help to keep you ahead of the challenges you’re facing.
Struggling with data growth? No problem.
The scalability of TSM is being doubled for the 3rd year in a row, now supporting up to 4 billion data objects in a single TSM Server. In 2008, the internal database limit was 500 million files, so we’ve made an 8X improvement since then. That means fewer backup servers are needed. And remember that TSM is a single server architecture; we don’t add “media servers” to provide scale.
Unified Recovery Management now includes Replication for faster Disaster Recovery
We’ve been working to simplify the data protection and recovery infrastructure by unifying the management of all the different tools you need for different applications, operating systems, data locations, and data loss causes. In the release of Tivoli Storage Manager Extended Edition v6.3, we’re adding client data and metadata replication to an off-site TSM Server. This provides a “hot standby” disaster recovery capability, managed from within the TSM Admin Center. The replication is asynchronous and can be scheduled on a per client basis to minimize the impact on network bandwidth. And it can be configured in a classic source-to-target model as well as between two active sources, many-to-one, or in a “round robin” architecture.
Simplifying the administrator’s life
One of the painful tasks an administrator has to do, especially in large environments, is patching the backup/archive client software on protected systems. With this release, we’re adding the ability to automatically push out client software updates across AIX, HP-UX, Linux and Windows systems (Windows push support was actually introduced last year). This new capability should reduce the time needed to perform an update by at least 80%.
Improved integration with VMware
Tivoli Storage Manager for Virtual Environments v6.3, our non-disruptive off-host solution for VMware virtualized servers, now supports VMware vSphere 5 and includes a plug-in for vCenter to easily manage TSM backup and restore operations from within the VMware environment. Tivoli Storage FlashCopy Manager v3.1 is also being released with VMware vStorage APIs for Data Protection integration and the vCenter plug-in to provide hardware-assisted application-aware snapshot management. Support for DB2, Oracle and SAP databases on HP-UX is also added in the new FlashCopy Manager release.
Something BIG for mainframe customers
Tivoli Storage Manager for z/OS Media v6.3 is a new connector that enables customers to leverage their mainframe-attached FICON storage devices for storing TSM data. This offering won’t increase the licensing costs for existing customers that move their TSM v5.x Server software from z/OS and install TSM v6.3 Server on an AIX system or a Linux on z partition, and gives them access to all of the cost-saving improvements made in TSM over the past 3 years.
The new standard in licensing simplicity
In June we announced the availability of the Tivoli Storage Manager Suite for Unified Recovery. This bundle of ten TSM and FastBack products is simply licensed by the amount of data being managed within the TSM environment, first copy only. We have seen outstanding results from this new offering, from both new and existing customers. The reason is simple: you want to use the right tool for each data protection job, but you don’t want to have to worry about acquiring and managing individual product licenses for each one. This is especially true in virtual server and cloud environments. Added benefit: our broad range of built-in data reduction technologies can dramatically reduce the costs of this offering.
Tivoli Storage Manager Suite for Unified Recovery v6.3 is also being announced, and includes all of the TSM and TSM for Virtual Environments enhancements noted above.
Many other improvements are being introduced across the family, including better reporting and monitoring, better scalability for Microsoft Windows, Exchange and SQL Server, and faster internal processes such as database backup. SAP customers using TSM for Enterprise Resource Planning v6.3 can now do incremental backups with Oracle RMAN.
For more information on the Tivoli Storage Manager enhancements, please refer to the announcement letter on ibm.com (link)
To learn more about all the new IBM Storage announcements, please click here (live 12 Oct.)
To borrow a phrase from a fellow blogger… Interest from customers on cloud storage is very, very hot, and that’s been keeping us very, very busy. The interest underscores the fact that public storage cloud providers have sent a “cost shockwave” through the industry and customers are taking notice.
While CIOs may still be too concerned about security and service levels to put much real corporate information in the public cloud, they have taken notice that these service providers are offering storage capacity at prices that are often lower than what they are paying for their own private storage. Sure, a service provider theoretically has more economies of scale and so could demand a better price from their hardware vendors, but they also have some profit margin to build into their “service”. There has to be more to it. The customers I talk to are wondering what these service providers are doing to operate at those costs – and whether any of their techniques can be applied in a private storage environment.
The situation begs the question “what is it that differentiates these public storage clouds from the traditional private storage environments that most clients operate?” From our experience with customers, there are four significant differences.
Storage resources are virtualized from multiple arrays, vendors, and datacenters – pooled together and accessed anywhere. (as opposed to physical array-boundary limitations)
Storage services are standardized – selected from a storage service catalog. (as opposed to customized configuration)
Storage provisioning is self-service – administrators use automation to allocate capacity from the catalog. (as opposed to manual component-level provisioning)
Storage usage is paid per use – end users are aware of the impact of their consumption and service level choices. (as opposed to paid from a central IT budget)
In this post, I’m going to try to explain these four concepts in sufficient detail that somebody responsible for a private storage environment could walk away with some practical recommendations that could save their company a pile of money. Most of this isn’t really original (the concepts have been around for a while), but so few enterprises operate this way that the person who introduces their company to these ideas often looks like a genius (and who doesn’t like that!!). It’s a long topic, so I’ve broken it into 3 posts.
In Part I of this post: I’ll explain the value of virtualizing storage resources. Hint: you’ve likely already done it to your server resources with some sort of server hypervisor like VMware vSphere, or IBM PowerVM, or Microsoft Hyper-V… so now let’s look at what you get from doing it to your storage resources with a storage hypervisor.
In Part II of this post: I’m going to explain how public storage clouds use management controls like service catalogs, self-service provisioning, and pay-per-use to drive down their costs. I’ll also try to offer some practical ideas for using these techniques in a private enterprise setting to gain similar efficiencies.
In Part III of this post: I’m going to explore some technical thoughts you’ll want to consider when picking a storage hypervisor.
Ready to jump in?
Storage resources are virtualized. Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them. Server hypervisors and the virtual machines they manage have improved efficiency (no more wasted compute resources), freed up mobility, and ushered in a whole new “cloud” language.
Well, the same ideas apply to storage. As administrators catch on to these ideas, it won’t be long before we’ll be asking the question “Do you remember back when virtual machines used disks that really were physical arrays (all that “physical” stuff that kept everything in one place and slowed all your processes down)?”
In August, Gartner published a paper that observed “Heterogeneous storage virtualization devices can consolidate a diverse storage infrastructure around a common access, management and provisioning point, and offer a bridge from traditional storage infrastructures to a private cloud storage environment” (there’s that “cloud” language). So, if I’m going to use a storage hypervisor as a first step toward cloud enabling my private storage environment, what differences should I expect? (good question, we get that one all the time!)
Perhaps the most obvious expectations are improved efficiency and data mobility. The basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, you can move a slice (or virtual volume) from tier to tier, from vendor to vendor, and now, from site to site all while the applications are online and accessing the data. This opens up all kinds of use cases that have been described as “cloud”. One of the coolest is inter-site application migration. Just recently, a hurricane hit the eastern coast of the United States. If your datacenter had been in the projected path of that hurricane and if you had implemented both a server hypervisor (let’s say VMware vSphere for your Intel servers and IBM PowerVM for your Power systems), and a storage hypervisor platform (let’s say IBM SVC), then here’s what you might have said: “Hey, the hurricane is coming, let’s move operations to another datacenter further inland…” IBM SVC Stretched-cluster allows you to access the same data at both locations giving you the ability to do an inter-site VMware vMotion and PowerVM Live Partition Mobility (LPM) move – non-disruptively. As far as the end users are concerned, their applications are running in a private cloud. For you… you avoided a disaster and got to sleep well that weekend.
But storage hypervisors are more, much more than just virtual slices and data mobility. Remember, we’re trying to think like a service provider who is driving cost out of the equation. Sure, we’re getting high utilization from allocating virtual slices, but are we being as smart as we could be about allocating those slices? A good storage hypervisor helps you be smart.
Thin provisioning: You have a client that asks for 500GB of new capacity. You’re going to give it to him as thin provisioned virtual capacity which is a fancy way of saying you’re not going to actually back it with real physical storage until he writes real data on it. That helps you keep cost down.
Compression: Same guy also asks to keep several snapshot copies of his data for recovery purposes. You’re going to start by giving him thin provisioned capacity for those snapshots, but you’re also going to compress whatever data those snapshots produce – again adding to your efficiency.
Agnostic about vendors: Because you’re providing virtual storage resources from a storage service catalog (we’ll talk more about that in Part II of this post), you have the freedom to shift the physical storage you operate from all tier-1 to a more efficient mix of lower tiers, and while you’re doing it you can create a little competition among as many disk array vendors as you like to get the best price / support.
Smart about tiers: If you shut your eyes real tight and think about the concept of a “virtual” disk that is mobile across arrays and tiers, you’ll quickly start asking questions about having the storage hypervisor watch for I/O patterns on blocks within that virtual disk that would benefit from higher tier capacity, like solid-state (SSD) or flash disk for example. A good storage hypervisor will automate the detection of such patterns and move hot data blocks to these highest tiers of storage if you have them.
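To make these ideas concrete, here is a toy model of a thin-provisioned, compressed virtual volume with hot-block detection. Everything here (the class name, block sizes, read-count threshold) is invented for illustration; a real storage hypervisor implements this at an entirely different scale, but the accounting is the same.

```python
import zlib
from collections import Counter

class VirtualVolume:
    """Toy model of a thin-provisioned, compressed virtual volume.

    Physical capacity is only consumed when a block is actually written,
    and written blocks are stored compressed. Illustrative only.
    """
    def __init__(self, size_blocks):
        self.size_blocks = size_blocks   # capacity the client was promised
        self.store = {}                  # block index -> compressed bytes
        self.reads = Counter()           # per-block read counts, for tiering

    def write(self, block, data):
        # Thin provisioning: space is consumed only here, on first write,
        # and compression shrinks what is actually stored.
        self.store[block] = zlib.compress(data)

    def read(self, block):
        self.reads[block] += 1
        return zlib.decompress(self.store[block])

    def physical_bytes(self):
        """Real capacity consumed, as opposed to capacity promised."""
        return sum(len(c) for c in self.store.values())

    def hot_blocks(self, threshold):
        """Blocks read often enough to earn promotion to a faster tier."""
        return [b for b, n in self.reads.items() if n >= threshold]

vol = VirtualVolume(size_blocks=1000)    # big promise, zero physical cost so far
vol.write(0, b"A" * 4096)                # only now is physical space consumed
for _ in range(5):
    vol.read(0)                          # block 0 becomes "hot"
print(vol.physical_bytes() < 4096)       # compressed: stored smaller than written
print(vol.hot_blocks(threshold=3))       # block 0 is a candidate for SSD
```

The client sees the full promised capacity; the administrator pays only for compressed, written blocks and can let the hot-block list drive tier placement.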
Are you getting the picture of why so many enterprises are beginning to agree with Gartner that a storage hypervisor can be a great first step in transitioning traditional IT into a private cloud storage environment? Application owners come to you for storage capacity because you’re responsible for the storage at your company. In the old days, if they requested 500GB of capacity, you allocated 500GB off of some tier-1 physical array – and there it sat. But then you discovered storage hypervisors! Now you tell that application owner he has 500GB of capacity… What he really has is a 500GB virtual volume that is thin provisioned, compressed, and backed by lower-tier disks. When he has a few data blocks that get really hot, the storage hypervisor dynamically moves just those blocks to higher tier storage like SSDs. His virtual disk can be accessed anywhere across vendors, tiers and even datacenters. And in the background you have changed the vendor storage he is actually sitting on twice because you found a better supplier. But he doesn’t know any of this because he only sees the 500GB virtual volume you gave him. It’s “in the cloud”.
Join the conversation! The virtual dialogue on this topic will continue in a live group chat on September 23, 2011 from 12 noon to 1pm Eastern Time. Join some of the Top 20 storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
I have a friend - he's a sales guy, but he's still a friend - who once had his laptop stolen from his rental car. This put him in a world of hurt. He hadn't been backing it up, so he lost everything that hadn't been transferred to some other system, including several very large proposals he had been working on for weeks, as well as information on many contacts and communications. He ended up losing at least one of those deals, all while spending many hours trying to rebuild his professional life. To say that he has been less than effective for the past few weeks would be an understatement.
How about you? If you are gainfully employed and not using a virtual desktop or other cloud service, you likely create, store and manage a great deal of important information on your personal computer. Do you back it up regularly? If you do, have you tested your backups lately to ensure you can actually recover your files if something - anything - goes wrong?
I remember seeing reports from analysts a few years ago that estimated as much as 20% of all corporate data is stored only on employee workstations. This creates a tremendous amount of total risk for organizations, even if a lost laptop directly affects only one employee and the projects and teams that he or she is working on. Leaving the critical task of backup to these employees has been the normal approach for most businesses, but that's obviously asking for trouble -- most of us will not remember to run a backup, even if we knew how.
And when central IT doesn't provide a solution, the smarter employees (usually the ones that have already experienced critical data loss) will take matters into their own hands. An increasingly popular solution for these employees is to use public cloud services, like DropBox -- but think about that: do you want your sensitive, corporate information stored on a public cloud service? Yikes!!!
It's time for the IT department to take ownership of this data and provide tools for data protection and recovery that are easy-to-use and don't get in the way of employee productivity. IBM offers an excellent solution: IBM Tivoli Storage Manager FastBack for Workstations. As the name implies, it is part of the industry leading family of scalable, robust, high performance data protection and unified recovery software solutions. It is fully integrated with Tivoli Storage Manager (TSM) to provide seamless, centralized protection of PCs within the context of enterprise-wide data protection. Or it can also be used standalone, without TSM.
FastBack for Workstations runs in the background, without any intervention needed by the employee. Whenever a file is saved to disk, a copy is made in the local backup repository automatically. That copy can then be replicated, again automatically and transparently, whenever the PC is connected to a network. The replication target can be a TSM Server, a file server, a cloud service, etc. So you have a copy locally to quickly restore individual files when you do something dumb, and you have a full copy off of your system to do a full restore when something worse happens. The central administration console can manage thousands of endpoints.
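The flow just described can be sketched in a few lines. This is a deliberately tiny stand-in that scans and copies rather than intercepting file-save events the way a real agent does; the function name and directory layout are inventions for illustration.

```python
import os, shutil, tempfile

def protect(src_dir, local_repo, remote_repo=None):
    """Copy every file under src_dir into a local repository, then mirror
    that repository to a remote one if it is reachable.

    Local copy -> fast restore of individual files after a mistake.
    Remote copy -> full restore after theft or hardware failure.
    """
    for name in os.listdir(src_dir):
        src = os.path.join(src_dir, name)
        if os.path.isfile(src):
            shutil.copy2(src, os.path.join(local_repo, name))
    if remote_repo is not None:          # "connected to a network"
        for name in os.listdir(local_repo):
            shutil.copy2(os.path.join(local_repo, name),
                         os.path.join(remote_repo, name))

# Simulate a workstation, its local repository, and an off-box target.
src, local, remote = (tempfile.mkdtemp() for _ in range(3))
with open(os.path.join(src, "proposal.doc"), "w") as f:
    f.write("draft")
protect(src, local, remote)
print(sorted(os.listdir(remote)))        # ['proposal.doc']
```

The two-stage copy is the whole idea: the local repository absorbs every save instantly, and replication to the remote target can lag until connectivity returns without the employee ever noticing.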
You can now try FastBack for Workstations for free. Just go to this link. You will quickly see just how easy it is to set up your backup policies, run your first backup, and be able to perform restores with a simple drag-and-drop operation.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Live Demo! IBM Tivoli Storage Manager FastBack and IBM Tivoli Storage Manager FastBack for Exchange Scheduled Dates in 2010
Mark Your Calendars! IBM will be presenting a series of live demonstrations dedicated to showing the value of the IBM Tivoli Storage Manager (TSM) FastBack and TSM FastBack for Exchange data protection products. These additions to the TSM product family offer the ability to meet aggressive Recovery Point and Recovery Time Objectives in an organization's data protection service.
The TSM FastBack family provides many advanced features, including:
Instant Restore - allows users to access their data or applications immediately, while the restore is taking place.
Continuous Data Protection - sends backup data continuously, which allows a recovery to be done to any point in time.
Incremental Forever Backups - prevent wasting time and money performing and storing unnecessary full backups. Each backup appears to be a full backup, but only the blocks that have been modified are copied.
FastBack Mount - allows access to backed-up data without it being recovered. This enables data to be validated after backups, the correct data to be identified before it is recovered, or data to be opened and its contents recovered at a more granular level, thus reducing the size and time of the recovery.
Exchange Brick-level Recovery - allows individual Exchange mail objects to be recovered from a previous backup without requiring an entire Exchange database to be recovered. TSM FastBack for Exchange does not require additional backup processing to provide IMR.
Branch Office Disaster Recovery - allows replication of branch office backup data to a central site. This data can be compressed and encrypted during the transfer. The replicated data at the central site can be used as the source for creating a tape copy of the data or for recovering branch office data and hosts. TSM FastBack allows the backups and replication of multiple branch offices to be monitored with a single tool.
TSM FastBack Bare Machine Recovery - allows hosts to be quickly recovered, even to dissimilar hardware.
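The incremental-forever idea is worth a small sketch: each backup copies only blocks whose content changed, yet any generation can be restored as if it were a full backup. This is an illustrative model (the block indexing, hashing, and in-memory repository are inventions here), not FastBack's actual on-disk format.

```python
import hashlib

def incremental_backup(volume_blocks, last_hashes, repo):
    """Store only blocks whose content changed since the previous backup.

    Returns the hash map for this generation; `repo` accumulates block
    data keyed by content hash across all generations.
    """
    new_hashes = {}
    for idx, data in enumerate(volume_blocks):
        h = hashlib.sha256(data).hexdigest()
        new_hashes[idx] = h
        if last_hashes.get(idx) != h:    # block modified -> copy it
            repo[h] = data
    return new_hashes

def restore(hashes, repo):
    """Rebuild the full volume from one generation's hash map -- a 'full'
    restore even though only changed blocks were ever transferred."""
    return [repo[hashes[i]] for i in range(len(hashes))]

repo = {}
gen1 = incremental_backup([b"aa", b"bb"], {}, repo)    # first pass copies all
gen2 = incremental_backup([b"aa", b"XX"], gen1, repo)  # only block 1 copied
print(restore(gen2, repo))                             # [b'aa', b'XX']
```

After the first backup, every subsequent run transfers just the modified blocks, while every generation remains restorable in full from the repository.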
These demonstrations are open to Customers, Business Partners and IBM employees.
TSM FastBack Demo Schedule for 2010: Demos will be available in English and Spanish. All English calls will be at 10:30 AM and 3:00 PM Central Time on Thursdays. All Spanish calls will be available at 1:00 PM Central Time on Wednesdays.
February: Wednesday 10th - Spanish 1:00 PM CT, Thursday 11th - English 10:30 AM CT , Thursday 25th - English 3:00 PM CT
March: Wednesday 10th - Spanish 1:00 PM CT, Thursday 11th - English 10:30 AM CT, Thursday 25th - English 3:00 PM CT
April: Thursday 8th - English 10:30 AM CT, Wednesday 14th - Spanish 1:00 PM CT, Thursday 22nd - English 3:00 PM CT
May: Wednesday 12th - Spanish 1:00 PM CT, Thursday 13th - English 10:30 AM CT, Thursday 27th - English 3:00 PM CT
June: Wednesday 9th - Spanish 1:00 PM CT, Thursday 10th - English 10:30 AM CT, Thursday 24th - English 3:00 PM CT
July: Thursday 8th - English 10:30 AM CT, Wednesday 14th - Spanish 1:00 PM CT, Thursday 22nd - English 3:00 PM CT
August: Wednesday 11th - Spanish 1:00 PM CT, Thursday 12th - English 10:30 AM CT, Thursday 26th - English 3:00 PM CT
September: Wednesday 8th - Spanish 1:00 PM CT, Thursday 9th - English 10:30 AM CT, Thursday 23rd - English 3:00 PM CT
October: Wednesday 13th - Spanish 1:00 PM CT, Thursday 14th - English 10:30 AM CT, Thursday 28th - English 3:00 PM CT
November: Thursday 4th - English 10:30 AM CT, Wednesday 10th - Spanish 1:00 PM CT, Thursday 18th - English 3:00 PM CT
December: Thursday 2nd - English 10:30 AM CT, Wednesday 8th - Spanish 1:00 PM CT, Thursday 16th - English 3:00 PM CT
There are Web Conference and Audio Conference components to this demonstration.
Web Conference: www.sametimeunyte.com, Conference ID: FASTBAK
Prior to the web conference, we suggest you do the following:
1) Go to www.sametimeunyte.com
2) Click on Support
3) Click on Lotus Sametime Unyte Meeting System Check
4) Select attendee type and click Next
5) Proceed with the system check and install any plug-ins required.
English Live Demo Audio Conference: Title: TSM Fastback LIVE Demo Passcode: FASTBACK Toll Free: 800-857-4143 Toll: 773-756-0845
Riverbed and IBM enjoy a strong partnership which, thanks in part to Riverbed’s Whitewater cloud storage gateways, extends to IBM’s storage management software ecosystem. Whitewater leverages public cloud storage to reduce backup and administration costs, improve disaster recovery readiness and provide secure off-site storage for critical backup data, providing LAN-like access to public cloud storage in a drop-in appliance.
What does this mean for the Riverbed/IBM partnership? A seamless integration with existing IBM Tivoli Storage Manager backup infrastructure and cloud-storage providers, paving the way to extracting more value from existing storage, application and network investments. Tivoli Storage Manager administrators can leverage Whitewater’s local caching and public cloud storage abilities to propel them into the next generation of storage and disaster recovery, leaving classic disk- and tape- based devices (and their operational and maintenance costs) behind. Together, Riverbed and IBM offer a best-of-breed solution which slashes costs and enables almost unlimited scalability, taking full advantage of the flexibility and cost savings offered by storage-cloud services.
Riverbed will be demonstrating how fast it can move TSM data to public cloud storage at IBM Pulse 2012 in Las Vegas, March 4-6. At the show, come by booth E-105 to ask for a Whitewater demonstration and learn more about how Riverbed can optimize and extend your TSM environment as well as accelerate your WAN with the Riverbed Steelhead product family.
At the recent Gartner IOM 2010 conference in Orlando, Florida, I had the good fortune of listening to a series of interesting topics and meeting some really smart people. As one might have guessed, the bulk of the sessions focused on virtualization and cloud topics. But the one topic that piqued my interest was unrelated to virtualization and cloud - it was deduplication, and it was hosted by Dave Russell.
The intent of the session was to bring forward some customer examples of deploying deduplication technologies in backup and recovery solutions. Most of you who read this blog know that deduplication and data reduction have been a hot topic in the industry. And as you likely know, almost every major vendor out there offers some form of deduplication with its associated benefits.
The session featured two customers who were willing to talk about their experiences with deduplication and the benefits they've received. One customer is using CommVault and the other is using IBM Tivoli Storage Manager v6 (TSM). While both customers showcased the quantified benefits from deduplication, the presentation from the TSM customer went beyond just the benefits of deduplication. The TSM customer revealed their quantified benefits and also identified some of the best practices they developed regarding deduplication.
This particular TSM customer is a large producer of natural gas in the U.S. The customer's environment has TSM managing about 1.3 petabytes of data from more than 1,500 nodes. Overall, their approach to managing backup storage is to do it as efficiently as possible and to reduce the overall amount of backup data under management.
Prior to leveraging TSM deduplication, this customer began with "incremental forever" and compression. Once TSM v6 was released, they adopted deduplication at the server and client in concert with the other data reduction features provided by TSM.
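For readers who haven't tried these features, the basic enablement looks roughly like this (illustrative only - the pool name is a hypothetical example, and the exact syntax should be verified against the TSM 6 documentation):

```
* Server-side: deduplication is enabled per storage pool (FILE device class)
UPDATE STGPOOL backuppool DEDUPLICATE=YES

* Duplicate identification then runs as a background server process
IDENTIFY DUPLICATES backuppool

* Client-side (TSM 6.2 and later): enable source-side deduplication
* in the client options file
DEDUPLICATION YES
```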
As they began evaluating their use of deduplication, they had to deal with demands from their internal customers - DBAs and Exchange administrators who prefer full backups, for example. Furthermore, they had to consider their rate of data change, evaluate retention policies, and ensure that their restore requirements weren't negatively impacted by the use of deduplication.
After significant testing and planning, the customer decided that they would initially deploy deduplication for their Oracle databases and Windows OS and system state backups. The results of using TSM deduplication were impressive ...
Oracle deduplication results - a 75% reduction of Oracle backup data after deduplication: about 15 TB of data on tape reduced to 3.8 TB of physical space on disk.
And their results on Windows OS and System State were a whopping 94% ... taking them from 172GB of managed data down to 11.4 GB. In this scenario, the customer leveraged TSM 6.2 client- or source-side deduplication.
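Using the figures above, the reduction percentages are easy to sanity-check with a quick sketch (the post's 94% is presumably rounded from slightly different managed-data figures, since 172 GB down to 11.4 GB works out to roughly 93%):

```python
def reduction_pct(before, after):
    """Percentage of data eliminated by deduplication."""
    return (1 - after / before) * 100

# Oracle: ~15 TB of backup data stored in 3.8 TB of physical disk
oracle = reduction_pct(15.0, 3.8)      # ~75%

# Windows OS/system state: 172 GB of managed data down to 11.4 GB
windows = reduction_pct(172.0, 11.4)   # ~93%

print(round(oracle), round(windows))
```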
Overall, very impressive results. By leveraging the data reduction features within TSM, the customer was able to save by using fewer tape library cells, tape drives, and disks.
In the end, the customer stated that TSM data reduction (with deduplication) helped them meet their objective of efficiently reducing data under management. Furthermore, it allowed them to reduce their overall hardware costs and meet or improve restore requirements. The last comment the customer made before closing the session was that, with all the various TSM data reduction capabilities in production, their job had ultimately gotten simpler now that their environment was running more efficiently ...
This is a fantastic story that I really enjoy sharing. If you are a TSM customer and have benefited from its data reduction technologies, then please give me a shout as I would like to hear your story as well.
There are a few important things to take note of. Microsoft Exchange Server 2010 included some significant changes, a number of which affect backup and restore. For example, under Exchange Server 2010:
Legacy-style backups (aka "streaming" backups) are no longer supported by Microsoft
VSS-style backups are the only supported online backup method
Exchange storage groups were removed completely
The Recovery Database replaced the Recovery Storage Group (RSG)
Database Availability Groups (DAGs) have replaced LCR, CCR, and SCR replication
Single Copy Clustering (SCC) is no longer available
With the release of Data Protection for Exchange version 6.1.2 and IBM Tivoli Storage FlashCopy Manager version 2.2 on June 4, 2010, we have implemented support for these changes. Here are details about the TSM functionality for Exchange Server 2010 that will be available on June 4, 2010:
Full Exchange Server 2010 support
Command-line Interface (CLI)
Graphical User Interface (GUI)
Database Availability Group (DAG) support
Query Exchange Information
Shows all databases with various attributes
Shows VSS component information
Full, Copy, Incremental, Differential
Back up from production database
Back up from a passive database copy (replica)
Back up to TSM Server
Back up to LOCAL snapshot
Offloaded backup to TSM
Shows all backups with their attributes
Restore from TSM Server
Restore into production database
Restore into "Recovery Database"
VSS Instant Restore from LOCAL snapshot
VSS Fast Restore from LOCAL snapshot
Mailbox Restore (IMR) and Item-Level Recovery
FlashCopy Manager and MMC Integration
Note: VSS backups to the TSM Server are enabled without the requirement for a TSM for Copy Services or FlashCopy Manager license.
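As a rough sketch of the command-line interface in action, backing up and restoring an Exchange 2010 database might look like the following (DB1 is a hypothetical database name, and the options are omitted; consult the Data Protection for Exchange documentation for the exact syntax and available parameters):

```
tdpexcc query exchange      (show databases and their attributes)
tdpexcc backup DB1 full     (full VSS backup of database DB1 to the TSM server)
tdpexcc restore DB1 full    (restore DB1 into the production database)
```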
Finally, a number of you were a part of the FlashCopy Manager 2.2 Beta Program and/or the Data Protection for Exchange 6.1.2 "Limited Availability" program, so thank you for helping us make it a great release!
As an IBM marketing manager, my job includes writing about storage technology. This post is about more than technology, though. It’s about a new breakthrough capability for managing storage costs and service levels.
I recently met with IBM Distinguished Engineer Mike Sylvia, who has been working on a Business Transformation project to enable automated right tiering for storage in IBM data centers. Right tiering is the notion that data should be hosted on the optimal storage tier to balance cost and performance requirements.
Mike explained that applications tend to be hosted on top tier storage. When he analyzed actual usage patterns, Mike found most data can be effectively hosted on lower cost storage. Mike’s project put numbers to a problem that is often hidden from view and, until now, nearly impossible to solve.
Hosting data on the wrong storage tier turns out to be a huge efficiency problem. Mike predicts IBM will save $13 million over 3 years in one data center by periodically moving data to the right tier. During the pilot, users saw their cost for storage drop by 50% per TB on average. This is big.
Like many advancements, IBM’s automated right tiering capability is accomplished by integrating existing technology. Mike Sylvia’s project combines storage virtualization, storage management automation and analytics. Today, IBM offers the technology in a bundled solution called SmartCloud Virtual Storage Center.
How does it work?
Step 1: IBM’s storage virtualization controller collects detailed usage metrics about storage it manages throughout the data center, without impacting application performance.
Step 2: IBM’s Storage Analytics Engine studies usage patterns over time to understand performance requirements.
Step 3: Storage tier recommendations are generated in reports that can be shared with application owners and IT management.
Step 4: Storage virtualization enables online data migration, with no disruption to applications or users.
Repeat: Usage patterns change over time, of course, so right tiering becomes an ongoing process.
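The recommendation step can be pictured with a small sketch. The tier names and I/O-density thresholds below are invented for illustration only; the actual Storage Analytics Engine is far more sophisticated:

```python
# Hypothetical sketch of a right-tiering recommendation based on I/O density.
# Tier names and thresholds are illustrative assumptions, not IBM's logic.

def recommend_tier(iops, capacity_gb):
    """Recommend a storage tier from observed I/O density (IOPS per GB)."""
    density = iops / capacity_gb
    if density > 1.0:
        return "tier1-ssd"       # hot data: keep on top-tier storage
    elif density > 0.1:
        return "tier2-fc"        # warm data
    return "tier3-nearline"      # cold data: candidate for migration down

# Example volumes: (observed IOPS, allocated capacity in GB)
volumes = {"erp-db": (5000, 2000), "archive": (20, 8000)}
for name, (iops, cap) in volumes.items():
    print(name, recommend_tier(iops, cap))
```

Because the decision is driven by measured usage rather than up-front predictions, re-running the analysis periodically (the "Repeat" step) naturally catches data that has gone hot or cold since the last migration.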
Why does it work?
Automated right tiering delivers the efficiency benefits of Information Lifecycle Management without the headaches and hidden costs. Automated right tiering has significant benefits for both data owners and IT leaders, so everyone wins.
For example, application and database owners can gain the following benefits:
Applications can move to top tier storage when they need it, without waiting for a maintenance window.
Average storage costs drop significantly, without a drop in services.
IT leaders benefit, too. For example:
Storage tier decisions are based on analysis of actual usage patterns, not predictions.
Storage performance management tasks are eliminated.
Data can quickly and easily be moved back to its original storage tier if requested, without incurring an outage.
IBM automated right tiering works with most storage systems, so deployment is nondisruptive.
The technology that enables automated right tiering has significant additional benefits, such as the ability to eliminate scheduled outages for storage system maintenance.
Problem solved. How has your organization addressed the storage right tiering challenge?
Today (June 4), IBM announces an enhanced Tivoli Storage Productivity Center v5.1 (TPC) that offers superb usability, unmatched reporting and integrated packaging like no other. Customers, sellers and partners are all understandably excited.
When we previewed the new user interface at Pulse 2012, many in the audience wanted access to it right away. The new user interface is in line with IBM’s strategy to offer a consistent user experience across its major storage offerings – the look and feel is great, navigation is a breeze and, most importantly, quick access to any information from the main dashboard is simply terrific.
The new dashboard view of TPC…
With v5.1, you can access your TPC management console through a web browser. The dashboard not only shows you capacity and connectivity information, but also details on event alerts, with criticality info, if any.
Entity-based views are quite refreshing too. Refer to the sample image below – it shows the overview of a Storwize V7000 system. From this overview screen, you can understand the utilization, activities and data throughput, among many other things.
Click here to watch a short video on ‘TPC’s new user interface’.
TPC is now integrated with IBM Cognos – industry-leading business intelligence capabilities are now brought to you to manage your storage environment more easily and efficiently. Cognos allows you to simply drag and drop metrics to assemble meaningful insights – and interestingly, this does not require advanced skills or writing SQL code.
A sample report created through Cognos…
Well, now the wait is over. To get access to the new user interface and the Cognos-based reporting, talk to your IBM sales representative or IBM business partner today.
Download TPC data sheet. View the 2012 Gartner Magic Quadrant for Storage Resource Management and SAN Management Software, compliments of IBM, here.
IBM today announced the upcoming availability of the IBM Storwize V7000, a groundbreaking new midrange storage system. This new solution brings enterprise-class functionality, scalability and management to the midmarket at an attractive price point. All of the storage built into the Storwize V7000 is virtualized, and it can also be extended to virtualize other storage systems in your environment, to leverage your investment in them while simplifying storage management and improving utilization. Cha-ching!
As customers begin to evaluate and deploy the Storwize V7000, they will naturally look at options for protecting the business critical data that they will be storing on it. The system has built-in FlashCopy® snapshot software, and is available with metro and global mirroring software for high availability and disaster recovery.
These replication solutions are priced aggressively; however, they rely on fibre-based networks to transfer data, and those network connections can be quite expensive, especially for midsized businesses. IBM has another, more cost-effective solution available for off-site disaster recovery.
IBM Tivoli Storage Manager FastBack will selectively replicate Windows and Linux application data from your IBM Storwize V7000 to another location, anywhere in the world, over IP-based (WAN, Internet, Intranet) networks. FastBack’s block-level incremental-forever data capture, with built-in data deduplication, is highly network bandwidth efficient. And since it performs its data acquisition in the background, as often as needed to support tight Recovery Point Objectives (RPO), there is no backup window to concern yourself with.
In addition to using FastBack for cost-effective, IP-based disaster recovery, you can leverage FastBack’s local, near-instant recovery capabilities to restore files, databases, or even entire volumes following almost any type of data loss.
For more information on Tivoli Storage Manager FastBack, please visit:
Data Reduction Chapter 8: Deduplication with Tivoli Storage Manager 6, FastBack and ProtecTIER
So far in this series, we’ve detailed the challenges that the tidal wave of data is placing on storage administrators, and how a smarter, more holistic and comprehensive approach to data reduction is needed to survive in a way that lets you do more with less.
We covered eliminating the largest source of duplicate data (full backups) and automating the migration, archiving and deletion of older data. Then, in chapter 7, we covered the basics of data deduplication. Now we’ll detail the differences between IBM’s deduplication offerings, and when to best use each.
Let’s talk first about the deduplication capabilities of Tivoli Storage Manager (TSM). This feature is included at no additional charge for TSM 6 Extended Edition customers. It can help to reduce recovery times by enabling you to store more backup data and recovery points on disk rather than tape. It works with data from all sources – normal backups, data imported via the TSM API, and archive and HSM data. TSM deduplicates your disk-based data pools as a post-process, so there is no impact on backup performance. After running, it automatically reclaims the storage that has been freed up.
TSM already eliminates the most common cause of duplicate data – full backups – so the reduction ratios you can expect from TSM’s deduplication solution are fairly modest – the average is about 40%. But when combined with its progressive incremental backup approach and built-in data compression, TSM’s effective data reduction rate is extremely competitive with any other solution on the market, as has been detailed in a commissioned report written by Enterprise Strategy Group (ESG), available here (fair warning – registration required – sorry):
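To see why a modest 40% dedup figure can still yield competitive end-to-end reduction, consider a back-of-the-envelope sketch. The 5% daily change rate and 2:1 compression ratio below are illustrative assumptions, not figures from the ESG report:

```python
# Illustrative comparison: weekly stored data for a traditional
# weekly-full-plus-incrementals scheme vs. TSM's progressive-incremental
# approach with compression and deduplication. All ratios are assumptions.

front_end_gb = 1000          # data protected
daily_change = 0.05          # assume 5% of data changes each day

# Traditional: one weekly full plus 6 daily incrementals
weekly_full = front_end_gb + 6 * front_end_gb * daily_change

# Progressive incremental: only changed data, every day, no periodic fulls
progressive = 7 * front_end_gb * daily_change

compression = 0.5            # assume ~2:1 client compression
dedup = 0.6                  # a 40% dedup reduction leaves 60% of the data

tsm_stored = progressive * compression * dedup
print(round(weekly_full), round(tsm_stored))
```

Under these assumed numbers, the weekly-full scheme stores about 1300 GB per week while the TSM-style pipeline stores about 105 GB, which is why the stacked data reduction features matter more than the dedup ratio in isolation.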
Announced today, Tivoli Storage Manager FastBack v6.1 also includes target-side data deduplication to help reduce the capacity required in the FastBack backup repository, adding to its value as the leading near-instant recovery solution on the market for business critical Windows servers and remote/branch offices. Also announced today was Linux support and tighter integration with the Tivoli Storage Manager Integrated Solutions Console (ISC), delivering on IBM’s vision of true enterprise-wide Unified Recovery Management.
IBM System Storage ProtecTIER is a technology leader in performance, scalability, data integrity and reliability. In true apples-to-apples comparisons, this solution is the fastest on the market in real customer environments. A single ProtecTIER system can easily scale in both performance (1000MB/sec) AND capacity (1PB of deduplicated data). ProtecTIER is one of the few solutions that doesn’t rely on a hash algorithm; it performs a byte-level differential to ensure data truly is a duplicate, for enterprise-class data integrity. And ProtecTIER features all IBM best-of-breed components versus the inexpensive OEM'd parts found in competitive products.
ProtecTIER has been proven in very large production environments and is supported worldwide by IBM’s services operations. The TS7650 ProtecTIER Deduplication Family ranges from small (7TB) to medium (18TB) to large-scale (36TB) appliances. And the TS7650G gateway offerings allow you to add the storage of your choice, up to 1PB. Active-Active cluster configurations also provide high availability capabilities.
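The "no hash algorithm" design point is worth unpacking. Most dedup engines trust a strong hash to declare two chunks identical; ProtecTIER instead confirms candidates with a byte-level comparison. Here is a toy sketch of that verify-before-dedup idea – the fingerprint and store structure are deliberately simplistic inventions, not ProtecTIER's actual HyperFactor algorithm:

```python
# Toy sketch: use a cheap fingerprint only to find *candidate* duplicates,
# then confirm with a full byte-level comparison before deduplicating.
# Illustrative only; not ProtecTIER's actual algorithm.

def weak_fingerprint(chunk: bytes) -> int:
    return sum(chunk) % 65521          # deliberately collision-prone

class DedupStore:
    def __init__(self):
        self.index = {}                # fingerprint -> list of chunk ids
        self.chunks = []

    def store(self, chunk: bytes) -> int:
        for chunk_id in self.index.get(weak_fingerprint(chunk), []):
            if self.chunks[chunk_id] == chunk:   # byte-level verification
                return chunk_id                  # true duplicate: dedup it
        self.chunks.append(chunk)                # new data: store it
        chunk_id = len(self.chunks) - 1
        self.index.setdefault(weak_fingerprint(chunk), []).append(chunk_id)
        return chunk_id

store = DedupStore()
a = store.store(b"hello world")
b = store.store(b"hello world")   # duplicate resolves to the same chunk id
c = store.store(b"hello worle")   # fingerprint may collide, but bytes differ
print(a, b, c)
```

Because a fingerprint collision can never cause two different chunks to be merged, data integrity does not depend on the strength of the fingerprint, only on the byte comparison.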
Review - Choosing TSM or ProtecTIER for Data Deduplication
While TSM works very well in ProtecTIER environments, you wouldn’t use both TSM deduplication and ProtecTIER deduplication simultaneously – that would require twice as much work for no additional benefit. So when should you choose one over the other? Both solutions offer the benefits of target-side deduplication: greatly reduced storage capacity requirements (especially when combined with TSM’s progressive incremental backup), lower operational costs, energy usage and Total Cost of Ownership, and faster recoveries with more data on disk.
Use TSM 6 built-in data deduplication when you want deduplication operations completely integrated within TSM, when you want the benefits of deduplication without the cost of separate hardware or software (it ships at no charge with TSM 6 Extended Edition), or when you desire end-to-end data lifecycle management with minimized data store requirements.
Use ProtecTIER when:
You need the highest performance, up to 1000 MB/sec or more
You have a large amount of data and need scalable capacity and performance
You need inline deduplication to avoid the operational impact of post-processing
You are deduplicating across multiple TSM (or other backup) servers
You don’t have TSM and are performing weekly full backups
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 9, where we’ll summarize IBM’s holistic approach to data reduction and show you how we can help you survive the tidal wave of data.