IBM Systems Storage Software Blog
Richard Vining
Data Reduction Chapter 4: Categorize your data for migration & deletion
In the last chapter, we discussed eliminating one of the biggest causes of data growth: the duplication of large amounts of data every time you perform a full backup. In this chapter, we'll explore the benefits of determining what different types of data you have and categorizing them so that you can manage each type most effectively. This will help you set up policies to migrate less frequently accessed data to lower-cost tiers of storage, and to delete the data that you no longer need or want. By cleaning out your production storage, you will shorten your backup cycles and improve application performance.
The next option for reducing the data storage footprint is to assess the different types of data you hold and where they are in the data life cycle. If your organization is like most, you keep all of your unstructured data in flat file systems, which are probably full of data that you rarely, if ever, need to access. This may include data you are no longer required by law or policy to keep but haven't deleted, such as old e-mails and memos, which could prove costly if discovered in legal proceedings.
The goal is to identify what data can be moved to less expensive tiers of storage, and what data can be deleted entirely from the environment. This will reduce the need to buy more primary storage capacity and make it easier to manage and protect what you have. Backup and restore performance will improve, and it will be easier to prove that you are meeting data retention and expiration policies.
IBM offers IBM Tivoli Storage Productivity Center for Data for this purpose. This solution reports on where your data resides, sorted by last-access or save dates, who owns it, the application that created it, and numerous other filters. From the intelligence you gain from these reports, you can set meaningful policies in your data management software to automatically take the appropriate action on data that shouldn't be clogging up your primary systems. Tivoli Storage Productivity Center for Data can also help identify and eliminate duplicate data, orphan data, temporary data and non-business data.
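The kind of age-based reporting described above can be sketched in a few lines of script. This is only an illustration of the idea, not Tivoli Storage Productivity Center for Data itself; the function name and age buckets are invented for the example.

```python
import os
import time
from collections import Counter

def age_report(root, buckets=(30, 90, 365)):
    """Count files and bytes, bucketed by days since last access."""
    counts, sizes = Counter(), Counter()
    now = time.time()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files we cannot read
            age_days = (now - st.st_atime) / 86400
            label = next((f"<= {b}d" for b in buckets if age_days <= b),
                         f"> {buckets[-1]}d")
            counts[label] += 1
            sizes[label] += st.st_size
    return counts, sizes
```

A report like this tells you which directories are dominated by data nobody has touched in a year, which is exactly the data a migration or deletion policy should target.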
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 5, where we'll talk about automating the migration, archival and expiration of your data.
Richard Vining
Data Reduction Chapter 3: Avoiding data duplication
Not only does that take a lot of storage capacity, but it also takes a long time, and these problems only get worse as you create more new data. (It's no wonder that data deduplication products are so popular; they were designed to eliminate all of this duplicate data. And when they claim to reduce your backup storage footprint by 90 percent or more, this is exactly the data that they're talking about.)
But what if you never had to perform a full backup again after the initial one? If you always backed up only the new and changed data, you wouldn't be creating all that duplicate data that needs an expensive deduplication solution to undo. Shorter backup windows, less storage required, and reduced storage acquisition costs would all be benefits of eliminating that weekly full backup. So would faster restore times, since deduplicated data wouldn't need to be rehydrated in order to be useful.
IBM has smarter solutions that can help prevent the need to perform full backups. The products in the IBM Tivoli® Storage Manager portfolio of recovery management solutions all provide incremental-forever backups.
These are the common backup methodologies and how they compare on backup and restore processing:
Full + incremental
Backup: This requires a full backup and then incremental backups over time, usually a full backup each weekend with incremental backups for the following six days. Only data that has changed since the day before is transferred to tape. Then, at the end of the week, another full backup must be run.
Restore: The full backup must be restored, then each day's incremental data applied to it. This means that if you have a full backup and three incremental backups of the same file, the file will be restored four times. This wastes time and money, and introduces risk.
Full + differential
Backup: This requires a full backup and then differential backups over time, usually a full backup each weekend with differential backups for the following six days. This means that all data that has changed since the last full backup will be backed up. If you assume a 10 percent daily change rate, then you will back up 100 percent (full) on the first day, 10 percent on the second, 20 percent on the third, 30 percent on the fourth, 40 percent on the fifth, 50 percent on the sixth, and 60 percent on the seventh. That means that you are backing up 310 percent of your data every week! You'll need more than 12 times your production capacity for just a month of backups.
Restore: You would restore the full backup and then apply the last differential up to the date you are restoring to. This is faster and more reliable than the full + incremental model, but at the cost of much more storage capacity.
Incremental-forever
Backup: This requires a full backup only the first time you back up, and then only incremental backups thereafter. There are no extra transfers of data, which saves network bandwidth and transfer time, makes backup and restore faster, and can save thousands of dollars in disk and tape costs.
Restore: You select the point in time that you want to restore from, and then restore the necessary files just once. This is much faster than with the other two methods.
The analysis shown in the figure above starts with 2TB of data and adds or changes 200GB per day. The assumption is that a full backup has already been performed to set the base.
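Using the same figures (a 2 TB base with 200 GB changed per day), the weekly arithmetic for the three schemes can be worked out in a few lines. The function below is only a back-of-the-envelope model of the comparison described above, not output from any IBM tool.

```python
def backup_volume_gb(base_gb, daily_change_gb, weeks):
    """Cumulative gigabytes transferred after `weeks` 7-day cycles.

    Assumes the traditional schemes run a full backup on day 1 of every
    week, while incremental-forever runs one initial full backup and
    then moves only changed data, every day thereafter.
    """
    # Full + incremental: a weekly full plus 6 daily incrementals.
    full_inc = weeks * (base_gb + 6 * daily_change_gb)
    # Full + differential: a weekly full, then each day re-sends all
    # changes since that full (1, 2, ..., 6 days' worth).
    full_diff = weeks * (base_gb +
                         sum(d * daily_change_gb for d in range(1, 7)))
    # Incremental-forever: one initial full, then changed data only.
    inc_forever = base_gb + (7 * weeks - 1) * daily_change_gb
    return full_inc, full_diff, inc_forever
```

With `backup_volume_gb(2000, 200, 4)`, full + incremental moves 12,800 GB in a month, full + differential moves 24,800 GB, and incremental-forever moves only 7,400 GB, more than a threefold difference between the best and worst cases.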
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 4, where we'll cover the discovery and categorization of data to help move it intelligently throughout its lifecycle.
Richard Vining
Data Reduction Chapter 5: Automated Data Migration
In previous chapters, we’ve talked about the need to reduce your data storage footprint in order to help survive the tidal wave of data, and the first steps in doing so include eliminating unnecessary duplication of data, and then categorizing your data so you can make smarter decisions on where to store it, and for how long.
In this chapter, we take the next step by automating these data management policies through three distinct processes: migration, archival, and expiration. The net result of these processes is to remove unneeded data from your production storage systems, which will reduce or delay your need to acquire more expensive hardware and reduce administrative costs, all without impacting key operational processes.
In the old days of computing and storage management, the concept of transparently moving data from one tier of storage to another was called hierarchical storage management, or HSM. Given IBM’s heritage in mainframes, we still use that term today. More recently, this concept morphed into Information Lifecycle Management (ILM), but it’s the same basic principle – move older, less-frequently accessed data off your most expensive storage devices onto slower, less costly storage media.
HSM and ILM solutions work transparently in the background, automatically selecting and moving files from primary to secondary tiers of storage based on the policy criteria that you set, such as file size or length of time since a file has been opened. They leave a pointer, or stub file, where the data was originally stored so that users and applications don’t need to worry about where the data was moved; the software transparently reroutes the request for any moved files. These solutions automatically move data to the proper media based upon policies you set, freeing up valuable disk space for active files and providing automated access to the migrated files when needed.
Data migration solutions help customers get control of, and efficiently manage, data growth and its associated storage costs by providing automated space management. These solutions should provide the following key features:
• Storage pool “virtualization” helps maximize utilization of the managed storage resources.
• Restore management is optimized based on the location of the data in the hierarchy.
• Migration is transparent to the users and to applications.
• Migrations are scheduled to minimize network traffic during peak hours.
• Automatic migrations occur outside the backup window.
• By setting proper threshold limits, annoying ‘out of disk space’ messages can be eliminated.
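The core mechanism behind these features, policy-driven migration that leaves a stub behind, can be sketched as follows. This is a minimal illustration of the stub-file idea, not how Tivoli Storage Manager is implemented; the marker format, thresholds, and function name are all invented for the example, and a real HSM driver would intercept opens on the stub and recall the file transparently rather than leave a plain text file.

```python
import os
import shutil
import time

STUB_MARKER = "#MIGRATED->"

def migrate_cold_files(primary_dir, secondary_dir,
                       max_age_days=180, min_size=1_000_000):
    """Move large files not accessed in `max_age_days` to secondary
    storage, leaving a small stub file behind at the original path."""
    cutoff = time.time() - max_age_days * 86400
    moved = []
    for name in os.listdir(primary_dir):
        src = os.path.join(primary_dir, name)
        if not os.path.isfile(src):
            continue
        st = os.stat(src)
        if st.st_atime > cutoff or st.st_size < min_size:
            continue  # still hot, or too small to be worth migrating
        dst = os.path.join(secondary_dir, name)
        shutil.move(src, dst)
        # The stub records where the data went, so a request for the
        # file can be rerouted without the user knowing it moved.
        with open(src, "w") as stub:
            stub.write(STUB_MARKER + dst)
        moved.append(name)
    return moved
```

Because only the policy criteria (age and size here) decide what moves, the same scan can run on a schedule outside the backup window with no administrator involvement.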
The IBM Tivoli Storage Manager (TSM) family includes two solutions for automating the migration of data between multiple tiers of storage. TSM 6 for Space Management is for AIX, HP-UX, Solaris and Linux data, while TSM HSM for Windows is for Windows servers.
Tivoli Storage Manager data migration solutions not only help you clean up your primary storage systems to help them run more efficiently, they can also be used to easily move data to new storage technologies as they are deployed. Migrating files to Tivoli Storage Manager also helps expedite restores, because there is no need to restore migrated files in the event of a disaster.
The benefits of Hierarchical Storage Management or Information Lifecycle Management include:
• Improve response times of file servers by off-loading inactive data
• Slow or even stop the growth of your production storage environment
• Use existing storage assets more efficiently
• Reduce backup times and resource usage by focusing on active files only
• Eliminate manual file system clean-up activities
In the next chapter, we’ll look at HSM’s big brother – archiving.
The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.
Shawn Jaques
Living in Boulder, Colorado, I am constantly hearing about "green" initiatives such as recycling, composting, alternative transportation, etc. Over the past several years, my family has been doing a much better job of lessening our impact on the Earth through things such as recycling, buying environmentally friendly products and even signing up for energy saving smart grid technology.
I appreciate it when corporations also do their part to reduce their environmental impact by leveraging greener technologies. But let's face it: most corporations act based on the impact to the bottom line (real or perceived) rather than the impact to the environment. Companies like IBM can make the decision easier for clients by building products that improve performance while reducing energy use or other environmental impacts.
I'm proud when IBM delivers "green" technology and thus wanted to point your attention to this video about energy efficient storage. Craig Smelser, VP of Security and Storage Development at IBM Tivoli, introduces some of the storage challenges that can be addressed with energy efficient IBM storage software solutions.
For more information, click here
Richard Vining
Data Reduction Chapter 6: Archiving
I’m back with the next installment on ideas for helping you to reduce the amount of storage capacity you need for an ever-increasing amount of data, and the amount of time you spend managing it. The last chapter covered transparently automating the migration of data from primary storage to secondary systems. An extension of this thought is archiving.
Archiving is another important data reduction technique for certain types of data. One example of this would be financial reporting data (such as weekly, monthly, quarterly, annual data), that needs to be retained for future trending, requirements or auditing, but does not need to consume valuable disk space where live data should reside. Historical medical records and customer statements also often fit into this category.
Archiving is for long-term record retention. It differs from backup in that it keeps files for a specific amount of time (where backup keeps a certain number of versions of a file) while removing the data from the primary production storage systems completely.
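The distinction between the two retention models, keeping a number of versions for backup versus keeping data for a fixed time for archive, can be sketched as two pruning rules. The field names and the 7-year default are invented for illustration; they are not Tivoli Storage Manager policy syntax.

```python
from datetime import datetime, timedelta

def backup_prunable(versions, keep_versions=3):
    """Backup retention: keep only the newest N versions of a file;
    everything older is eligible for expiration."""
    ordered = sorted(versions, key=lambda v: v["backed_up"], reverse=True)
    return ordered[keep_versions:]

def archive_prunable(objects, retention_days=2555, now=None):
    """Archive retention: keep each object for a fixed period
    (2555 days approximates a common 7-year requirement)."""
    now = now or datetime.now()
    limit = timedelta(days=retention_days)
    return [o for o in objects if now - o["archived"] > limit]
```

Note that the backup rule never looks at the calendar and the archive rule never counts versions, which is exactly why one cannot substitute for the other.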
Key features of IBM archiving solutions include:
Using IBM archiving solutions for records retention can help you:
IBM offers a choice of solutions for archiving, depending on customer preferences and the applications involved.
Tivoli Storage Manager 6 includes an archiving capability directly integrated into its client backup software. It is policy based, allowing the administrator to set retention times. If the requirement for how long a file must be retained changes, all the administrator has to do is update the policy, and the solution will retroactively update the already archived files; there is no need to restore and re-archive, as some competitive offerings require. Tivoli Storage Manager also offers the option of integrating data from many different applications into your archive repository, and the archive repository can be a virtualized pool of heterogeneous storage systems.
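The retroactive behavior described above falls out naturally if each archived object holds a reference to its policy rather than a copied retention value. The following is a minimal sketch of that design choice; the class and attribute names are invented for illustration and do not reflect Tivoli Storage Manager internals.

```python
from datetime import datetime, timedelta

class ArchivePolicy:
    def __init__(self, retention_days):
        self.retention_days = retention_days

class ArchivedObject:
    """Stores a reference to its policy, not a copied retention value,
    so editing the policy retroactively moves the expiration date of
    everything already archived under it."""
    def __init__(self, archived_at, policy):
        self.archived_at = archived_at
        self.policy = policy

    def expires_at(self):
        return self.archived_at + timedelta(days=self.policy.retention_days)
```

Lengthening `policy.retention_days` immediately lengthens `expires_at()` for every object bound to that policy, with no restore-and-re-archive step.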
IBM Information Archive, which contains a specialized version of Tivoli Storage Manager called IBM System Storage™ Archive Manager, is a standalone archive appliance that ingests data directly from more than 40 applications including messaging, healthcare and medical imaging, design and engineering, document management, and others.
Database archiving with IBM Optim and Tivoli Storage Manager
IBM Optim™ Data Growth Solution is a unique database archiving solution that transparently migrates unneeded records from database tables to secondary storage. Like Tivoli Storage Manager’s space management and archive solutions, Optim provides database and storage administrators with a range of cost and performance benefits.
There are also benefits to using Tivoli Storage Manager in conjunction with Optim, which works seamlessly with Tivoli Storage Manager’s application program interface (API) to move archived database records directly into Tivoli Storage Manager’s storage hierarchy.
Optim can also be used with other file-based backup/restore products; however, this involves a two-step process to first archive the data and then back it up. When used with Tivoli Storage Manager, Optim automatically archives database records and then uses the API to store the data directly in a Tivoli Storage Manager storage pool hierarchy. With any other file-based backup/restore product, Optim uses standard file operations to archive data to a disk-based file system, and the backup product can then back up those files to supported backup media.
Using Optim and Tivoli Storage Manager together allows you to:
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 7, where we’ll talk about data deduplication and compression as the next options in an effective, holistic approach to reducing your overall data storage footprint.
Delbert Hoobler
Come join me for "Ask the Experts online Jam"!
What is the "Ask the Experts online Jam"?
The "Ask the Experts Online Jam" is a valuable opportunity for YOU to connect with 75+ real-world IBM experts on 30+ Tivoli products. These experts, many from IBM development, are recruited to answer your questions for a concentrated period of 12 hours (8 AM to 8 PM Eastern USA).
When is the "Ask the Experts online Jam"?
November 12, 2009, 8 AM to 8 PM Eastern USA. To find the time in your city, check out the World Clock meeting planner website.
Here's how it works in brief:
Step 1: You have a question, usually fairly technical.
Step 2: You find the expert best suited to answer it by browsing for an expert by pre-defined category and specific product.
Step 3: You fill in a field on the "Ask the Experts online Jam" web application to submit the question.
Step 4: You receive an email answer to your question(s), and the Ask the Experts Jam web application is updated for other members to see.
Ask questions of more than 75 IBM experts on the following 30+ topics:
Datacenter Management tools: IBM Tivoli Monitoring, IBM Tivoli Composite Application Manager for Transactions and WebSphere/J2EE, Tivoli Application Dependency Discovery Manager, Tivoli Provisioning Manager, Tivoli Service Request Manager,
Network, Service Assurance and Events: Tivoli Netcool Impact, Tivoli Netcool Performance Flow Analyzer, Tivoli Netcool Performance Manager, Tivoli Netcool/OMNIbus, Tivoli Network Manager (Precision and NetView/d),
Asset Management: Asset Management for IT and Enterprise, Enterprise Asset Management Trends and IBM Maximo Industry Solutions,
Security: Tivoli Access Manager, Tivoli Identity Manager, Tivoli Federated Identity Manager, Tivoli Enterprise Access Manager Single Sign On, Tivoli Compliance Insight Manager, Tivoli Directory Server, Tivoli Key Lifecycle Manager, Tivoli Security Information and Event Manager, Tivoli Security Policy Manager,
Storage: Tivoli Storage FlashCopy Manager on AIX and Windows, Tivoli Storage Manager, Tivoli Storage Productivity Center, Tivoli Storage Manager (TSM) FastBack,
z/OS: Netview for z/OS, OMEGAMON, Tivoli Security for Systems z: Tivoli zSecure Suite
Click here for more information.
I personally will be available from 8 AM to 2 PM covering IBM Tivoli Storage FlashCopy Manager on Windows, but there will also be many other storage experts available for the entire 12 hours. Please join us!