In response to: Viral Friday - Pulse 2010 - N series Demonstration
Thanks, Tiffeni, for the nice video.
IBM Systems Storage Software Blog
Posted by Rajendran Subramaniam
Posted by Richard Vining
In chapter 1, I described how the planet is becoming ‘smarter’ and that this transformation is creating enormous amounts of new data that needs to be effectively managed. In this chapter, I’ll review some of the things that complicate the effort to ensure all this data is properly retained, protected and available when needed.
Ideally, you would like to have a single tool that does everything, across the entire enterprise, providing the ability to effectively respond following any type of event. While many vendors promise to solve your problems, nobody can provide this capability in a single package – the problem is just way too complex. But (tease), IBM is driving toward a unified recovery management capability that enables you to manage a selection of tools from a single administrative interface. More on this next week; first we need to ensure that we appreciate the complexity.
The first category is infrastructure – where is the data?
Your IT shop probably includes several if not many types of hardware: computer platforms such as x86, Power, RISC, SPARC, mainframes, etc. There is also a wide array of storage platforms, including direct-attached (DAS), network-attached (NAS), tape, and I’m sure many of you still have optical disks somewhere. And from many different vendors!
On these platforms, you’re going to have different operating systems: AIX, HP-UX, Linux, Solaris, VMware, Windows, z/OS. Then they’re going to be physically located in different places – data centers, staff offices, production facilities, remote/branch offices, disaster recovery sites, and warehouses.
Different types of networks, and the available bandwidth on them, further complicates the system. You have local-area (LAN), wide-area (WAN), storage-area (SAN) and metro-area (MAN) networks; additionally you may have cable networks running to some offices (particularly home offices), telecommunications networks that now carry data, and USB connections to some storage devices. And finally, you likely have important data being created and stored on user workstations.
How many tools do you use just to cover this level of complexity? But wait – there’s more! The next question is: who owns the data?
We also have to matrix in the different types of applications you have – general file systems; email, instant messaging and collaboration systems; databases such as DB2, Oracle, SAP, SQL Server and MySQL; and your industry-specific mission-critical applications such as CAD/CAM, medical records management, software development, manufacturing resource planning (MRP), or customer relationship management (CRM).
Now consider that the data created and used by any of these applications may be on any hardware platform and operating system, in multiple locations, using a variety of networks. A lot of the data may be on user workstations as well. Oh my!
But there’s still more – what can go wrong?
As I noted in my last blog, lots of things can go wrong, and you really need a different response for each of them:
OK, now draw a line from every block on the diagram above to every other block and tell me what your backup and recovery plan is for every line – even in this simple diagram, there are 100 different scenarios, but when you consider all the variables, it may be millions. What tools would you use, who will use them, how long will it take to recover, and how much data will be lost? And what does it cost?
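To make that combinatorial explosion concrete, here is a minimal Python sketch. The category lists are deliberately tiny and made up; a real shop’s matrix is far larger on every axis:

```python
from itertools import product

# Toy-sized dimensions standing in for the categories above.
platforms = ["x86", "Power", "mainframe"]
locations = ["data center", "branch office", "user workstation"]
applications = ["email", "database", "file system"]
failures = ["file deletion", "disk failure", "site outage"]

# Every combination is a distinct recovery scenario you must plan for.
scenarios = list(product(platforms, locations, applications, failures))
print(len(scenarios))  # 3 * 3 * 3 * 3 = 81, and this is a deliberately tiny example
```

Add a few more values to each dimension (or a few more dimensions, such as network type or operating system) and the count quickly reaches the millions mentioned above.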
More on that next time!
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
In response to: Storage Consolidation with SONAS and TSMRich,
I love how you used your winter/summer clothing exercise as an example of consolidating storage and the pros and cons of having a large storage container vs. multiple smaller containers. I also put my off-season clothes in containers and should have started moving in the spring/summer attire and moving out the winter... maybe next weekend.
Posted by Tiffeni Woodhams
While I was at Pulse 2010 in Las Vegas, I had the opportunity to interview Scott Sterry from Cristie Software Limited. Cristie Bare Machine Recovery integrates with IBM Tivoli Storage Manager to provide a Bare Machine Recovery (BMR) solution for Windows®, Linux, Sun Solaris and HP-UX.
Posted by Tiffeni Woodhams
In response to: Managing the tidal wave of data with IBM Tivoli
Thanks for posting the white paper. For more information about Tivoli Storage, visit the blog at http://www.ibm.com/blogs/tivolistorage
Posted by Rajendran Subramaniam
In response to: Viral Friday - Pulse 2010 - Information Archive Demonstration
Thanks for the video. Also, here is a link to the Smart Archive: http://www-01.ibm.com/software/data/smart-archive/?cm_sp=MTE9840
Posted by Tiffeni Woodhams
In the second half of 2009, the International Technology Group (ITG) was contracted to do a detailed analysis of IBM and competitive storage offerings for SAP, to determine a three-year total cost of ownership (TCO) for each product included in the comparison. ITG developed two comparisons, one for Large Enterprise accounts and a second for Midmarket accounts, and chose appropriate competitive offerings for each. For the Large Enterprise accounts, ITG compared EMC V-Max systems vs. IBM DS8000 systems and HP XP2400 vs. IBM XIV systems. For the Midmarket accounts, ITG compared HP Enterprise Virtual Array (EVA) vs. IBM DS5000 systems and HP EVA vs. IBM XIV systems. ITG developed three-year TCO comparisons and provided IBM an executive summary and a detailed analysis report that can be shared with customers.
Read the outcome of the analysis:
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Midsize Installations - Executive Summary
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Midsize Installations - Management Brief
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Enterprise Installations - Executive Summary
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Enterprise Installations - Management Brief
ITG also participated in a webcast, available for replay, discussing the results of their studies of comparative disk system costs for SAP environments in large and midsized organizations.
Posted by Richard Vining
Unified Recovery Management #3: Recovery Considerations
Welcome back! In chapter 2, I probably scared you senseless with the incredible complexity that storage and backup administrators face in trying to manage data across a wide array of infrastructure and application types, adapting tools and processes to react to a wide array of things that can go wrong, all to ensure that the impacts on users and business operations are minimized.
In this chapter, I’ll attempt to put a little structure around how to cost-effectively address this daunting challenge. It’s all about policies that balance the needs of the business against the resources you have – money, people, infrastructure (or more simply, money!).
If you try to take a ‘one-size-fits-all’ approach to data protection and recovery management, you are either going to spend way too much money (putting the solvency of your organization at risk), or you are not going to meet the needs of the most critical business applications (putting competitiveness and long-term viability at risk).
So the answer is to apply the right technologies and policies to each application need. And yes, this will add another layer of complexity to the environment, but there isn’t much choice.
This diagram lists just some of the things you should consider when creating a recovery plan for each type of data, in each location, for each of the things that can reasonably go wrong.
The first one is the Recovery Point Objective (RPO). This measures how much data you’re willing to risk, in terms of the time between backup operations. If you’re backing up a system once each night, you have an RPO of 24 hours, and all of the data created and changed in the 24 hours after the last backup is at risk. That’s obviously not good enough for many applications in many industries, but it is good enough for others.
The second consideration is Recovery Time Objective (RTO). This measures the amount of time it takes to recover from an event. Depending on the type and location of the event, RTO can include the time to determine what happened, deploy any needed hardware and other infrastructure, copy the needed data from the backup repository, recreate any lost data if possible (see RPO above), and reconnect your users and other systems. The longer the RTO, the longer the applicable systems may be down, so planning for a short RTO for the more critical applications is appropriate.
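As a rough illustration of how these two objectives translate into numbers, here is a small Python sketch. The step durations are hypothetical, chosen only to show the arithmetic:

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval: timedelta) -> timedelta:
    # With periodic backups, the worst-case data loss (RPO exposure)
    # is the full interval between two backup operations.
    return backup_interval

def estimated_rto(assess: timedelta, provision: timedelta,
                  restore: timedelta, reconnect: timedelta) -> timedelta:
    # RTO is roughly the sum of the recovery steps described above:
    # determine what happened, deploy infrastructure, copy the data
    # back, and reconnect users and other systems.
    return assess + provision + restore + reconnect

# Nightly backups put up to a full day of data at risk.
print(worst_case_data_loss(timedelta(hours=24)))   # 1 day, 0:00:00

# Hypothetical step durations for one application tier.
print(estimated_rto(timedelta(hours=1), timedelta(hours=4),
                    timedelta(hours=6), timedelta(hours=1)))  # 12:00:00
```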
Next, you’ll need to consider the costs of the solution: acquisition, plus labor, bandwidth, ongoing services, etc. The key to a successful recovery plan is to balance these costs against the needs of the business – ensuring that you are delivering the appropriate levels of RPO and RTO at the lowest possible cost.
The last consideration is probably obvious to everyone, but you’re not going to want to deploy any recovery solution that negatively impacts business operations. For example, applying an aggressive RPO (frequent backups) to a critical application isn’t going to work if the recovery solution requires that you stop and close the application to perform the backup. The cure is not allowed to kill the patient.
So, what can you do? There are lots of choices and point solutions – from many vendors - to address each of the permutations that your plan may have, and I’ll cover many of them in my next blog. Then I’ll start looking at ways to tie all those technologies together to create a truly Unified Recovery Management platform.
Posted by Richard Vining
Unified Recovery Management #4 – Technology Choices
In my last entry, I explored some of the considerations that you should include in an overall data protection and recovery strategy, including matching the value of the data being protected to service level expectations such as Recovery Point Objectives (RPO), Recovery Time Objectives (RTO) and overall costs.
Today I’ll cover some of the many technology choices that are available to help you meet your objectives. As in previous installments in this blog, this adds another layer of complexity to the program – which technology do you use to meet which need? And at the end of the day how many different tools can you really manage effectively to meet the complex challenges of protecting your data?
Tape-based backup is probably still the most widely-used backup method in corporate and government environments. The challenges with tape have been well documented – lots of manual processes that can lead to errors and recovery problems; poor RPO and RTO performance; and the physical movement of tape cartridges that can create data security risks. For these reasons, many organizations have moved to a blend of disk and tape for backup, enabling faster and more frequent backups, and faster restores from disk, while moving backup data to tape in the background for longer-term retention.
Mirroring and replication are good technologies for system-level recovery and fail-over. However, they can leave you with a big hole in your recovery strategy – the loss or corruption of individual files – since any loss will be immediately replicated to the backup system, leaving you with two bad copies of your data.
Continuous Data Protection, or CDP, takes the benefits of replication and adds in point-in-time recovery options. The problem with CDP is cost – it requires far more storage capacity than other solutions, and can strain network bandwidth as well.
All three of the above technologies may also be unable to recover files that are open at the time of the data loss incident, such as a system crash – although there are utilities available to help mitigate this issue.
Snapshots fix the open file issue by creating application-consistent time-based recovery points. It is necessary to pause or “quiesce” the application for a very short period of time to accomplish a snapshot, but it’s far faster than a tape backup because it only takes incremental changes over a much shorter period of time. Snapshots can be run very frequently, often many times per hour, to meet aggressive Recovery Point Objectives (RPO). Hardware-based snapshot technologies are not always ‘application-aware’, so the consistency (ability to fully recover) of open files can be a problem.
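The quiesce-snapshot-resume pattern looks roughly like the Python sketch below. This is only an illustration: app_quiesce, app_resume and snap_create are stand-ins for whatever freeze/thaw hooks and snapshot calls your application and storage array actually provide.

```python
from contextlib import contextmanager

def app_quiesce(service: str) -> None:
    # Stand-in for your application's freeze/flush hook (e.g., a VSS
    # writer or a vendor-supplied snapshot agent).
    print(f"quiescing {service}")

def app_resume(service: str) -> None:
    # Stand-in for the matching thaw/resume hook.
    print(f"resuming {service}")

def snap_create(volume: str) -> None:
    # Stand-in for the actual hardware- or OS-level snapshot call.
    print(f"snapshot taken of {volume}")

@contextmanager
def quiesced(service: str):
    # Pause the application only for the brief window the snapshot needs;
    # resume it even if the snapshot fails.
    app_quiesce(service)
    try:
        yield
    finally:
        app_resume(service)

with quiesced("mail-db"):
    snap_create("/vol/maildb")
```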
Disaster Recovery and Business Continuity (DR/BC) services are a key focus for many organizations, especially given the growing number of natural and man-made threats to normal operations. Some companies handle it themselves, others contract it out. Either way, you’ll need to balance overall costs against benefits, matching the needs of individual applications and locations to the service levels provided.
Data deduplication is a much hyped technology that, depending on where it is applied, can reduce the amount of data that needs to be backed up and sent over the network, or reduce the amount of backup capacity required, or both. Most of the gains claimed by deduplication are in environments that perform weekly full backups that cause an enormous amount of duplication. Check out my blog series on data reduction to learn more about this important topic.
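At its core, deduplication stores each unique chunk of data once and keeps a recipe of chunk references for each backup. Here is a minimal fixed-size-chunk sketch in Python; real products use variable-size chunking and far more robust indexing:

```python
import hashlib

def dedup_chunks(data: bytes, chunk_size: int = 4096):
    # Store each unique chunk once, keyed by its hash; keep an ordered
    # list of hashes (the "recipe") to reassemble the original stream.
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

# Two mostly identical "weekly fulls" dedupe to little more than one copy.
monday = b"A" * 8192 + b"B" * 4096
friday = b"A" * 8192 + b"C" * 4096
store_one, _ = dedup_chunks(monday)
store_both, _ = dedup_chunks(monday + friday)
print(len(store_one), len(store_both))  # 2 unique chunks vs. 3 for both backups
```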
Virtual Tape is a relatively new entrant to the market, combining the best efficiencies of disk and tape, and adding in data deduplication to help meet cost per capacity goals. As a backend repository, a virtual tape library (VTL) does not replace data capture technologies such as backup, replication or archive, but can be an effective complement to them.
I added Reporting to the diagram above only because you’ll want to have visibility into the functionality and performance of your data protection and recovery environment in order to provide the assurance that your strategy is effective and meeting the needs of the business.
In my next blog, I’ll start to show how IBM can provide encouraging answers to these questions.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Posted by Tiffeni Woodhams
During Pulse 2010 in Las Vegas, I interviewed Alistair Mackenzie from Silverstring, an IBM Business Partner. Just last week, Silverstring launched TSMagic, helping you understand your TSM estate like never before... See the news article for more information on TSMagic.
Check out the live video interview with Alistair:
Posted by Tiffeni Woodhams
IBM Tivoli Storage Productivity Center (TPC) for Disk Midrange Edition V4.1 is now available! Announced last month, TPC for Disk Midrange Edition is designed to help reduce the complexity of managing midrange SAN environments that include IBM System Storage DS3000, DS4000, DS5000, SAN Volume Controller (SVC) Entry Edition and IBM Virtual Disk System devices, allowing administrators to configure, manage, and monitor the performance of their entire midrange storage infrastructure from a single console. This new offering provides customers the equivalent features and functions of the Tivoli Storage Productivity Center for Disk enterprise offering at a fraction of the cost... up to 80% off list price.
TPC for Disk Midrange Edition is part of the IBM Tivoli Storage Productivity Center V4.1 suite of integrated storage infrastructure management products, designed to help you manage almost every point of your storage network – from the hosts, through the fabric, to the physical disks – in a multi-site enterprise. It can help simplify and automate the management of storage devices and the networks that connect them.
Utilizing a new Storage Management Initiative Specification (SMI-S) Common Information Model (CIM) agent, Tivoli Storage Productivity Center for Disk Midrange Edition can provide over 40 different reports and performance metrics, including:
Administrators can monitor and analyze performance statistics for these storage systems down to five-minute intervals. The performance data can be viewed in real time in the topology viewer, stored for historical reporting, or used to generate timely alerts by monitoring thresholds for various device parameters.
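Conceptually, threshold-based alerting over those five-minute samples works like the Python sketch below. The metric names, threshold values and tuple layout are invented for illustration; TPC’s internal data model is considerably richer.

```python
# Hypothetical thresholds for two performance metrics.
THRESHOLDS = {"read_latency_ms": 20.0, "port_utilization_pct": 85.0}

# Five-minute samples as (timestamp, device, metric, value) tuples.
samples = [
    ("2010-06-04T10:00", "DS5000-01", "read_latency_ms", 12.5),
    ("2010-06-04T10:05", "DS5000-01", "read_latency_ms", 27.3),
    ("2010-06-04T10:05", "SVC-02", "port_utilization_pct", 91.0),
]

# Flag any sample whose value crosses its configured threshold.
for ts, device, metric, value in samples:
    limit = THRESHOLDS.get(metric)
    if limit is not None and value > limit:
        print(f"ALERT {ts}: {device} {metric}={value} exceeds threshold {limit}")
```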
Tivoli Storage Productivity Center for Disk Midrange Edition is set apart from IBM Tivoli Storage Productivity Center for Disk because it is:
To learn more, visit the TPC for Disk Midrange Edition Web page. For more information on the IBM Tivoli Storage Productivity Center suite of products, read the data sheet.
Posted by Delbert Hoobler
In December of last year, I blogged about IBM Tivoli Storage FlashCopy Manager for Windows version 2.1. I talked about how FlashCopy Manager provides fast application-aware backups and restores leveraging advanced snapshot technologies. I also discussed how FlashCopy Manager on Windows supports Microsoft SQL Server and Microsoft Exchange Server using Microsoft's Volume Shadow Copy Service (VSS) and how it integrates into your enterprise whether you have Tivoli Storage Manager or not. So, if you haven't read my previous blog about FlashCopy Manager on Windows, why not check that out first, then come back to learn more about the new features we just announced!
This Friday, June 4, 2010, IBM will release IBM Tivoli Storage FlashCopy Manager Version 2.2. Some of the exciting new Windows features in this release include:
Did you know? FlashCopy Manager also supports UNIX platforms! Some of the exciting new UNIX features included in FlashCopy Manager Version 2.2 are:
For more details, read the IBM Tivoli Storage FlashCopy Manager Version 2.2 announcement.
General information about IBM Tivoli Storage FlashCopy Manager is located here.
Posted by Delbert Hoobler
IBM just announced that Tivoli Storage Manager for Mail - Data Protection for Exchange 6.1.2 and IBM Tivoli Storage FlashCopy Manager 2.2 now support Microsoft Exchange Server 2010! For more details, read the FlashCopy Manager Version 2.2 announcement or see my blog from yesterday.
There are a few important things to take note of. Microsoft Exchange Server 2010 included some significant changes, a number of which affect backup and restore. For example, under Exchange Server 2010:
With the release of Data Protection for Exchange version 6.1.2 and IBM Tivoli Storage FlashCopy Manager version 2.2 on June 4, 2010, we have implemented support for these changes. Here are details about the TSM functionality for Exchange Server 2010 that will be available on June 4, 2010:
Note: VSS backups to the TSM Server are enabled without the requirement for a TSM for Copy Services or FlashCopy Manager license.
Posted by Tiffeni Woodhams
Working with IBM, a hospital in Asia Pacific gained a data protection solution that meets users' data availability requirements, scales on demand to support a growing warehouse of patient data and medical images, and simplifies data migration and data recovery tasks.
The benefits of the solution include a 50% reduction in the backup window, restoration of individual Microsoft Exchange objects in minutes, and system restores in under 10 minutes.
Read the complete case study to see how this Asia Pacific hospital gained peace of mind with virtualized data protection from IBM.
More success stories of other customer implementations of IBM technologies can be found here
Posted by Delbert Hoobler
I wanted to share some information about an article that we just published regarding backing up Exchange Server 2010.
Along with all the other new features of Exchange Server 2010, Microsoft introduced Database Availability Groups (DAGs). DAGs are part of the large focus that Microsoft put on High Availability and Site Resilience within Exchange Server 2010. DAGs allow you to have passive database copies (aka "replicas") that can serve as hot standbys for protection against machine failures, database failures, network failures, viruses, or other issues that may cause an access problem to a database.
DAGs are similar in function to Exchange Server 2007 Cluster Continuous Replication (CCR) replicas. However, they extend the capabilities even further. One of the key benefits that customers get when they use DAGs in their enterprise is the ability to completely offload backups from their production Exchange Servers. That means they can run all of their backups from a database copy instead of the production database so as not to impact their production Exchange servers. This enables the production Exchange Servers to spend their resources on doing what they know best, i.e. handling email and facilitating collaboration.
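The selection logic behind offloaded DAG backups can be pictured with the hedged Python sketch below: prefer a healthy passive copy so the active (production) copy is never touched. The DatabaseCopy fields and the fallback rule are illustrative assumptions, not the published sample script or a real Exchange/TSM API.

```python
from dataclasses import dataclass

@dataclass
class DatabaseCopy:
    server: str
    database: str
    is_active: bool   # the copy currently serving users (hypothetical field)
    is_healthy: bool  # replication is caught up and mountable (hypothetical field)

def pick_backup_copy(copies):
    # Prefer a healthy passive copy; fall back to the active copy only
    # if no passive copy is usable.
    passive = [c for c in copies if not c.is_active and c.is_healthy]
    return passive[0] if passive else next(c for c in copies if c.is_active)

copies = [
    DatabaseCopy("MBX1", "DB01", is_active=True, is_healthy=True),
    DatabaseCopy("MBX2", "DB01", is_active=False, is_healthy=True),
]
target = pick_backup_copy(copies)
print(f"Back up {target.database} from the copy on {target.server}")  # MBX2
```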
IBM Tivoli Storage Manager for Mail: Data Protection for Exchange and IBM Tivoli Storage FlashCopy Manager fully support backing up DAG passive database copies. Data Protection for Exchange and FlashCopy Manager also support using those backups to recover the production database, as well as recovering individual mailboxes and items. You can find more details in the IBM Tivoli Storage Manager for Mail: Data Protection for Microsoft Exchange Server Installation and User's Guide V6.1.2.
We just published an article (which includes a sample script) to help you automate backing up your Exchange Server 2010 DAG databases. We know that you will find this quite helpful in setting up your backup strategy: