IBM Systems Storage Software Blog
Milan Patel
Get ready for Pulse 2010, February 21-24 at the MGM Grand Hotel in Las Vegas. Pulse 2010 will be one of the most important storage and service management conferences of the year, one that will deliver the information you need directly from your peers, our partners and your IBM Storage team. The conference will include an impressive storage management agenda covering everything from emerging storage technologies, architectures, and backup and recovery to archiving and managing storage in virtualized data centers and server environments. Once again we are very excited to have your peers share best practices from multiple industries, geographies and companies of various sizes.
As your business and data centers continue to evolve, we continue to evolve and adapt our storage and information infrastructure management solutions to meet your growing needs and facilitate your journey to a dynamic storage infrastructure, with innovative products and services that matter to your bottom line. Pulse 2010 gives us the opportunity to showcase our commitment to you, and you will see firsthand how IBM's increased investment in storage development has produced an aggressive and exciting roadmap that will expand and enhance our capabilities.
Detailed communications on the hotel and the Call for Presentations will be coming your way shortly. The key to a successful event is your participation, and we hope you play an active role in the agenda. Please visit the Pulse 2010 website for more details.
Tiffeni Woodhams
In preparation for Pulse 2010 in Vegas, I interviewed John Connor, the Pulse track lead for Storage and Information Infrastructure, to help you generate good ideas for submitting your call for speaker abstracts for Pulse. John will actually be reviewing the submissions with a team of other folks, so here is some advice that you can leverage to increase your chances of being accepted to speak at Pulse.
Me: What are the hot topics in the area of storage and information infrastructure today?
John: In today's tight economy, the hot topics in storage and information infrastructure are how customers are leveraging storage in their information infrastructure to improve scalability, addressing the performance of their storage management assets, cutting capital expenditures by reducing duplicate data to lower storage capacity needs, and simplifying the overall management of their storage infrastructure.
Me: Which topics would you like to see presented at Pulse?
John: Ideally I would like to see sessions at Pulse that highlight customer success stories, how Tivoli storage management and/or IBM storage solutions helped customers address the challenges we discussed above.
Me: Who are good candidates for submitting abstracts and why?
John: The best candidates to talk about these successes are the folks who implemented them, which would be our customers. Customers are able to discuss their return on investment and how the IBM storage solutions are benefiting them in their everyday business operations. Another good candidate would be our business partners, accompanying and co-presenting with their clients on the IBM storage solutions they've implemented.
Me: What are you looking for in a good proposal?
John: As I mentioned earlier about the topics I would like to see presented, a good proposal is a customer success story around IBM storage solutions, including Tivoli storage management software, and/or storage hardware and storage services. This proposal should describe the initial pain points or problems that existed, how our solutions helped and the lessons learned that could be applied to other customer situations. This type of proposal and session at Pulse will help others learn from each other.
Me: What are the benefits of submitting an abstract for Pulse?
John: Submitting your abstract is a great way to gain visibility for your work and your particular solution. Customers whose abstracts are selected will receive a complimentary pass to attend Pulse ($1,995 value) and admission to the on-site VIP client lounge. Attending Pulse is not only a great way to share your company's success in implementing IBM storage solutions, but it is also a great education and networking opportunity.
Me: What is the deadline for submitting call for speaker abstracts?
John: The deadline to submit your abstract is Nov. 20th. Don't delay, submit your proposal today.
With such great guidance from John, you're sure to write a perfect proposal. If you have any questions about submitting abstracts for Pulse or want feedback on an idea, just leave a blog comment. Also, be sure to check out this justification letter if you need that extra edge to convince your boss of the value of attending Pulse. I hope to see you there!
Tiffeni Woodhams
Welcome to the Tivoli Storage blog.
We have gathered a team of SMEs from various areas of the business to discuss a variety of topics, spanning different interest areas including customer success stories, upcoming events, Business Partner spotlights, technical tips and tricks, product strategy, roadmaps and hot topics -- and of course, topics of interest to you!
Introducing the team!
BJ Klingenberg: Senior Technical Staff Member - Storage Software, IBM Software Group
BJ has over 25 years of storage software strategy and development experience. He has held various technical and management positions, nearly all of which have been related to storage software. His experience in enterprise storage management includes DFSMS, DFSMShsm, DFSMSdss, Tivoli Storage Manager, Tivoli Storage Productivity Center (TPC) and System Storage SAN Volume Controller (SVC). He has also been involved in projects that apply ITIL management best practices to enterprise storage management. BJ is currently focusing on storage archiving solutions. BJ is a graduate of the University of Illinois at Urbana-Champaign, where he received a Bachelor of Science degree in Computer Science, and holds a Master of Science degree in Computer Science from the University of Arizona.
Dave Rice: Business Partner Marketing, Tivoli Storage Software
Dave currently works in IBM's Worldwide Software Group, where he drives Business Partner marketing for Tivoli storage software with a focus on the Asia Pacific and Japan geographies. In this role, Dave influences the Business Partner sales pipeline through lead/pipeline analysis, progression activities, partner communications, and implementing programs that provide Business Partner opportunity identification. Dave has been in a broad set of storage software marketing roles for the past 13 years, and has 35 years with IBM. Outside of IBM, Dave's interests include astronomy, as well as home and life improvement projects.
Del Hoobler: Senior Software Engineer
Del is a Senior Software Engineer who has worked for IBM for over 20 years in software design, development and services. For the past 13 years, he has worked on designing and developing software products for the IBM Tivoli Storage Manager (TSM) suite of products. Most recently, Del was the technical development lead for the TSM Windows snapshot (VSS) support for Microsoft Exchange Server and Microsoft SQL Server. Del enjoys working with people and helping solve their complicated IT problems.
Devon Helms is currently an intern with the IBM Tivoli Software group and a second-year MBA candidate at the Paul Merage School of Business at UC Irvine. His studies focus on business strategy and corporate finance. Before returning to the academic world to pursue his MBA, Devon was a business operations and technology consultant. He has been involved in hundreds of engagements, analyzing and improving his customers' business processes. After his studies are complete, Devon wants to continue to help clients improve the performance of their businesses through business process and financial analysis. In his free time, Devon is an avid marathon runner, rock climber, and SCUBA diver. Devon lives in Lakewood, CA with his lovely wife, Shana, and his 8-year-old Siberian Husky and faithful running partner, Frosty.
Greg Tevis: Tivoli Storage Technical Strategist
Greg has over 27 years in IBM storage hardware and software development. He worked in ADSM/TSM architecture and technical support in the 1990s and was one of the original architects of IBM's storage resource management solution, Tivoli Storage Productivity Center (TPC). He currently has responsibility for technology strategy for all Tivoli Storage and was involved in all of the recent IBM Storage acquisitions including XIV, Diligent, FilesX, Novus Consulting, and Arsenal Digital.
Jason has been the product manager for the Tivoli Storage Productivity Center (TPC) family since joining IBM in 2006. Prior to joining IBM, Jason was a product manager at EMC and Prisa Networks, responsible for the road map and strategy of various storage management offerings. When not helping define the direction for TPC, Jason acts as the President for Classic Soccer Club, a youth soccer club where his son currently plays.
John Connor: Product Manager
John is the Product Manager for IBM's flagship data protection and recovery offerings, the Tivoli Storage Manager family. During John's tenure as product manager, TSM has experienced strong growth, growing faster than the overall market and gaining market share. Prior to joining the Tivoli Storage Manager team in 2005, John helped drive the business strategy for IBM Retail Store Solutions. Before that, John had product and marketing roles in various IBM software businesses, including WebSphere and networking software. John has an MBA from Duke University and an undergraduate degree in electrical engineering from Manhattan College. In his spare time, John enjoys competing in triathlons and has successfully completed an Ironman triathlon.
John R. Foley Jr.: Product Marketing Manager
John is currently a marketing manager within IBM's Tivoli storage software marketing team. John has over 20 years of experience in the areas of storage hardware, storage software and system networking. He has held positions in management, product line management, strategy, business development and marketing. In the past 10 years, he has served on multiple storage projects including SAN storage (fibre channel & iSCSI), Network Attached Storage (NAS) and fibre channel switch offerings. Most recent projects include the introduction of IBM's System Storage N series portfolio stemming from the NetApp OEM agreement and the release to market of IBM's newly introduced Tivoli Storage Productivity Center Version 4 and IBM Information Archive Version 1.
Kelly Beavers: IBM Storage Software Business Line Executive
Kelly joined the IBM Storage Software team in 2004 as Director of Strategy and Product Management for Storage Software and Solutions. Her team is responsible for guiding the development and release of products that capitalize on market/technology trends, and for defining and executing tactical go-to-market plans for IBM storage software solutions across both the Tivoli and Systems Storage brands. Kelly has 28 years with IBM where she's held a variety of roles including Finance, Pricing, Tivoli Channel Development, Director of Customer Insight, managing Market Intelligence, Customer Relations and Marketing Operations. Kelly is married with two daughters, ages 19 and 12.
Matt Anglin: Tivoli Storage Manager Development
Matt has been a member of the Tivoli Storage Manager Server Development Team for 15 years. His areas of expertise include data movement to and within the server, deduplication, shredding, and DB2 interactions. He is the AIX platform expert in TSM, and is knowledgeable about other Unix, Linux, and Windows platforms. Matt lives in Tucson, Arizona.
Matthew Geiser: Manager, Storage Software Product Management
Matt joined IBM in 2001 and has worked in product management and product development for Storage Software offerings including SAN Volume Controller, Tivoli Productivity Center, Tivoli Storage Manager and IBM Information Archive. Matt's current responsibilities include managing the product management team for the storage infrastructure management offerings. Prior to IBM, Matt worked in a variety of operations, project management and software development roles in the banking and energy industries.
Milan Patel: Senior Product Marketing Manager
Milan is responsible for product marketing of IBM storage software for virtualized server environments, storage clouds and, of course, everyday storage management concerns like backup, recovery, archiving and replication. Milan has been with IBM for over 6 years, working in server and storage systems and storage software marketing groups. Prior to that, Milan spent 13 years in various capacities, from development to product management, for various server subsystems and systems management.
Richard Vining: Product Marketing Manager
Rich is the Product Marketing Manager responsible for the IBM Tivoli Storage Manager portfolio of products. Rich joined IBM in April 2008 as part of the acquisition of FilesX, where he served as Director of Marketing. Rich has more than 20 years of experience in the data storage industry, holding senior management roles in marketing, alliances, customer support and product management at a number of leading edge companies, including Signiant, OTG Software, Plasmon and Cygnet. Rich enjoys eating, drinking, travelling and golfing (but doesn't everybody?)
Rodney Fannin: Worldwide Channel Manager, Tivoli Storage Software
Rodney has over 15 years of experience in working with Business Partners. Primary responsibilities include refining the channel strategy for Storage software and developing sales and marketing tactics to increase reseller revenue worldwide. Rodney is also a contributing author for the BP Spotlight on our blog.
Roger Wofford: Product Manager
Roger is currently a Product Manager in Tivoli Storage Software. He has experience in Manufacturing, Development, Marketing and Sales within IBM. He enjoys golf, swimming and the Rocky Mountains. Roger plans to blog about how customers use archiving solutions in their storage environments.
Ron Riffe: IBM Storage Software Business Strategist
Ron is currently the business strategist for IBM Storage Software. During the last six years, Ron has been devising and implementing IBM's storage software strategy with a focus on creating greater client value through integrating IBM storage software and storage hardware offerings. Ron has managed storage systems and storage management software for more than 23 years, holding positions in senior management, product line management, strategy and business development for both IBM System Storage and IBM Tivoli Storage. Ron has written papers on the synergies of storage automation and virtualization and frequently speaks at conferences and customer locations on the subject of storage software. Prior to joining IBM, Ron spent 10 years as a corporate storage manager for international manufacturing firm Texas Instruments after receiving a B.S. in Computer Science from Texas A&M University.
Shawn Jaques: Manager, IBM Tivoli Storage Product Management
Shawn has been in his current role as manager of storage software product management for nearly three years. The team is responsible for product strategy, content, positioning and pricing of IBM storage software solutions. Previously, Shawn held product and market management roles in other Tivoli product areas, as well as a stint in Tivoli Strategy. Before joining IBM, Shawn was a Consulting Manager at Cap Gemini and an Audit Manager at KPMG. Shawn has a Master of Business Administration from The University of Texas at Austin and a Bachelor of Science from the University of Montana. He lives in Boulder, Colorado and enjoys fly-fishing, skiing and hiking with his wife and kids.
Terese Knicky: Analyst Relations Tivoli
Terese is on Tivoli's analyst relations team, covering Storage, System z, Job Scheduling and IBM's general enterprise solutions. Terese was born and raised in Omaha, NE and transplanted to Texas, where she enjoys watching her two boys play college football.
And finally, let's talk about me. I'm Tiffeni Woodhams, and I have been with IBM for nearly seven years. Currently, I am a Tivoli Storage Marketing Manager responsible for general marketing activities ranging from pipeline measurement and tracking to marketing execution guidance and communications for the geography teams; I also serve as the Tivoli Storage social media lead and co-lead for the IBM Storage social computing strategy. I work on major launches like Dynamic Infrastructure and Information Infrastructure, providing the storage messaging and linkages. Prior to this role, I held several other marketing positions, including Tivoli Provisioning Go-to-Market Manager; Benelux Software Marketing Manager focusing on Tivoli, WebSphere, and Lotus; Americas Tivoli Marketing Manager; and Tivoli Launch Strategist. In my spare time, I enjoy playing sports (basketball, softball, and golf), coaching JV girls basketball, riding horses, and spending time with family and friends.
Now that you know a little background on each of the team members, we hope that you will let us know some of your interest areas when it comes to IBM Storage and IBM Tivoli Storage Software solutions. Please post comments to this blog and let us know what you want to hear about.
Some topics we will be discussing in the next month include:
Pulse 2010, the Premier Service Management Event
Data Reduction - the steps to get to where you want to be
Archiving - why you need to do it
Unified Recovery Management
New Product announcements and roadmaps.
Thanks and we look forward to hearing your feedback.
Richard Vining
Data Reduction Chapter 1: The challenges posed by the tidal wave of data
We're storing and using more data than ever before. The volume of data is growing exponentially, government regulations are expanding, and competitive pressures are increasing, forcing us to retain more of our data for longer periods of time. But our budgets are flat or being cut. And as we become more dependent on digital information, the costs of losing any of it are increasingly painful. The bottom line, of course, is that we need to do a better job of managing our data assets, and as these assets grow and our budgets shrink, we need to do more with less. So we need smarter solutions.
Storage administrators are on the front lines of the Tidal Wave of Data battle. Some of the challenges from data growth that administrators are struggling with include:
- It takes longer to perform backups, often not completing within backup window allowances
- Some data is not being adequately protected

IBM can help you build a dynamic storage management infrastructure that will enable you to cope with all of these challenges. We have solutions to help reduce your data storage footprint, and the goals that we set out in these solutions are: to reduce your capital and operational costs; to improve your application availability and service levels; and to help you mitigate the risks associated with losing data and a rapidly changing environment.
With these solutions you should: need less storage; have less data to manage; experience less downtime; and be more competitive. To learn more, please visit the Data Reduction Solutions web page and stay tuned for Chapter 2, where we will outline a holistic and comprehensive approach to data reduction.
Richard Vining
Data Reduction Chapter 2: Surviving the tidal wave of data - options for data reduction
In chapter 1, we discussed the struggles that storage administrators are having with the tidal wave of data. In this chapter, we'll begin talking about how data reduction technologies can help you survive and even thrive in the face of these challenges.
IBM takes a holistic approach to data reduction, unlike competitors that offer point solutions to problems that they may in fact be causing. For example, a huge contributor to data growth is the repeated duplication of large amounts of data every time you perform a full backup.
So, one option is to avoid data growth from unnecessary data duplication by only backing up data that has changed since the last backup. This addresses the cause of the problem, not the symptom. For example, if you have a 5 percent per week data change rate, 95 percent of your data didn't change this week. If you perform a full backup this weekend, you're duplicating almost everything you backed up last weekend. Not only does that take a lot of storage capacity, but it also takes a long time, and these problems only get worse as you create more new data. It's no wonder that data deduplication products are so popular; they were designed to eliminate all this duplicate data. And when they claim to reduce your backup storage footprint by 95 percent or more, this is exactly the data that they're talking about.
Another option is to determine what different types of data you have and categorize it so that you can manage it most effectively, by moving less frequently-accessed data to lower-cost tiers of storage, and by deleting data that you no longer need or want. This will shorten your backup cycles and improve application performance, as well as reduce or delay the need to buy more primary storage capacity.
A third option is to put automated processes in place, based on policies that meet business requirements and/or service level agreements, to migrate, archive and delete data. There are several actions that can be taken on your data files based on criteria such as age, how long it has been since last access, which application created it, etc. These automated solutions can include (a small policy sketch in Python follows this list):
Transparent migration of data from production storage systems to a hierarchy of secondary systems; the data remains on-line and available without any modifications to applications.
Archival of data, removing it completely from production systems and storing it in secure storage where retention policies can be set and managed.
Expiration of data, deleting it from all storage once it is no longer needed, or to meet corporate governance policies.
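To make these policies concrete, here is a minimal sketch of such a decision engine. It is purely illustrative: the thresholds, the FileInfo fields and the action names are hypothetical stand-ins for real business and SLA rules, not anything from an IBM product.

from dataclasses import dataclass

@dataclass
class FileInfo:
    name: str
    age_days: int          # time since the file was created
    idle_days: int         # time since the file was last accessed
    retention_days: int    # how long policy requires us to keep it (hypothetical)

def decide(f: FileInfo) -> str:
    """Classify a file into one of the automated actions described above."""
    if f.age_days > f.retention_days:
        return "expire"    # delete from all storage per governance policy
    if f.idle_days > 365:
        return "archive"   # remove from production, retain in secure storage
    if f.idle_days > 90:
        return "migrate"   # move to a lower-cost tier; data stays online
    return "keep"          # active data stays on primary storage

for f in (FileInfo("q2-report.xls", 800, 400, 2555),
          FileInfo("temp.log", 400, 395, 90)):
    print(f.name, "->", decide(f))   # q2-report.xls -> archive, temp.log -> expire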
The last option is to compress and deduplicate the data you end up putting into your data protection and retention systems. Data deduplication is the most popular technology in this category, and we'll discuss it and the other technologies mentioned above in greater detail in future chapters of this blog.
To learn more, please visit the Data Reduction Solutions web page and stay tuned for Chapter 3 in which we'll dig into the first step in effective data reduction.
Richard Vining
Data Reduction Chapter 3: Avoiding data duplication
As we saw in the last chapter, performing a weekly full backup when only a small fraction of your data has changed means duplicating almost everything you backed up the weekend before. Not only does that take a lot of storage capacity, but it also takes a long time, and these problems only get worse as you create more new data. (It's no wonder that data deduplication products are so popular; they were designed to eliminate all of this duplicate data. And when they claim to reduce your backup storage footprint by 90 percent or more, this is exactly the data that they're talking about.)
But what if you never had to perform a full backup again after the initial one? If you always backed up only the new and changed data, you wouldn't be creating all that duplicate data that needs an expensive deduplication solution to undo. Shorter backup windows, less storage required, and reduced storage acquisition costs would all be benefits of eliminating that weekly full backup. So would faster restore times, since deduplicated data wouldn't need to be re-hydrated in order to be useful.
IBM has smarter solutions that can help prevent the need to perform full backups. The products in the IBM Tivoli® Storage Manager portfolio of recovery management solutions all provide incremental-forever backups.
These are the common backup methodologies and how they compare on backup and restore processing:
Full + incremental
Backup: This requires a full backup and then incremental backups over time, usually a full backup each weekend with incremental backups for the following six days. Only data that has changed from the day before is transferred to tape. Then at the end of the week another full backup must be run.
Restore: The full backup must be restored, then each day's incremental data applied to it. This means that if you have a full backup and three incremental backups of the same file, it will be restored four times. It is a waste of time and money, and introduces risk.
Full + differential
Backup: This requires a full backup and then differential backups over time, usually a full backup each weekend with differential backups for the following six days. All data that has changed since the last full backup is backed up each day. If you assume a 10 percent daily change rate, you will back up 100 percent (the full) on the first day, 10 percent on the second, 20 percent on the third, 30 percent on the fourth, 40 percent on the fifth, 50 percent on the sixth, and 60 percent on the seventh. That means you are backing up 310 percent of your data every week! You'll need more than 12 times your production capacity for just a month of backups.
Restore: You restore the full backup and then the last differential taken before the point you are restoring to. This is faster and more reliable than the full + incremental model, but at the cost of much more storage capacity.
Incremental forever
Backup: This requires a full backup the first time you back up, and then only incremental backups from then on. There are no extra transfers of data, which saves network bandwidth and transfer time, makes backup and restore faster, and can save thousands of dollars in disk and tape costs.
Restore: You select the point in time that you want to restore from, and then restore the necessary files just once. This is much faster than with the other two methods.
The analysis shown in the figure above starts with 2TB of data and adds or changes 200GB per day. The assumption is that a full backup has already been performed to set the base.
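To make the comparison concrete, here is a back-of-the-envelope model using the same assumptions (2TB of data and 200GB of daily change, i.e. a 10 percent daily change rate, over a seven-day week). This is my own illustration of the arithmetic above, not data from the figure.

BASE_TB = 2.0
DAILY_CHANGE_TB = 0.2   # 10 percent of the base changes each day

def full_plus_incremental(days: int = 7) -> float:
    # A full on day one, then only each day's changes.
    return BASE_TB + DAILY_CHANGE_TB * (days - 1)

def full_plus_differential(days: int = 7) -> float:
    # A full on day one, then everything changed since that full:
    # 1x the daily change on day two, 2x on day three, and so on.
    return BASE_TB + sum(DAILY_CHANGE_TB * n for n in range(1, days))

def incremental_forever(days: int = 7) -> float:
    # The initial full was taken long ago; only changes move each week.
    return DAILY_CHANGE_TB * days

print(f"Full + incremental : {full_plus_incremental():.1f} TB per week")   # 3.2
print(f"Full + differential: {full_plus_differential():.1f} TB per week")  # 6.2
print(f"Incremental forever: {incremental_forever():.1f} TB per week")     # 1.4

The 6.2TB weekly total for full + differential is the 310 percent figure from the example above; incremental forever moves less than a quarter of that.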
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 4, where we'll cover the discovery and categorization of data to help move it intelligently throughout its lifecycle.
Delbert Hoobler
Come join me for "Ask the Experts online Jam"!
What is the "Ask the Experts online Jam"?
The "Ask the Experts Online Jam" is a valuable opportunity for the YOU to connect with 75+ real world IBM experts on 30+ Tivoli products. These experts, many from IBM development, are recruited to answer your questions for a concentrated period of 12 hours. (8am eastern - 8pm eastern USA)
When is the "Ask the Experts online Jam"?
November 12, 2009, 8 a.m. to 8 p.m. US Eastern. To find the time in your city, check out the World Clock meeting planner website.
Here's how it works in brief:
Step 1: You have a question, usually fairly technical.
Step 2: You find the expert best suited to answer it by browsing experts by pre-defined category and specific product.
Step 3: You fill in a field on the "Ask the Experts Online Jam" web application to submit the question.
Step 4: You receive an email answer to your question(s), and the Ask the Experts Jam web application is updated for other members to see.
Ask questions of 75+ IBM experts on the following 30+ topics:
Datacenter Management tools: IBM Tivoli Monitoring, IBM Tivoli Composite Application Manager for Transactions and WebSphere/J2EE, Tivoli Application Dependency Discovery Manager, Tivoli Provisioning Manager, Tivoli Service Request Manager,
Network, Service Assurance and Events: Tivoli Netcool Impact, Tivoli Netcool Performance Flow Analyzer, Tivoli Netcool Performance Manager, Tivoli Netcool/OMNIbus, Tivoli Network Manager (Precision and NetView/d),
Asset Management: Asset Management for IT and Enterprise, Enterprise Asset Management Trends and IBM Maximo Industry Solutions,
Security: Tivoli Access Manager, Tivoli Identity Manager, Tivoli Federated Identity Manager, Tivoli Access Manager for Enterprise Single Sign-On, Tivoli Compliance Insight Manager, Tivoli Directory Server, Tivoli Key Lifecycle Manager, Tivoli Security Information and Event Manager, Tivoli Security Policy Manager,
Storage: Tivoli Storage FlashCopy Manager on AIX and Windows, Tivoli Storage Manager, Tivoli Storage Productivity Center, Tivoli Storage Manager (TSM) FastBack,
z/OS: NetView for z/OS, OMEGAMON, Tivoli Security for System z: Tivoli zSecure Suite
Click here for more information.
I personally will be available from 8am to 2pm covering IBM Tivoli Storage FlashCopy Manager on Windows but there will also be many other storage experts available for the entire 12 hours. Please join us!
Richard Vining
Data Reduction Chapter 4: Categorize your data for migration & deletion
In the last chapter, we discussed eliminating one of the biggest causes of data growth: the duplication of large amounts of data every time you perform a full backup. In this chapter, we'll explore the benefits of determining what different types of data you have and categorizing it so that you can manage it most effectively. This will help you set up policies to migrate less frequently accessed data to lower-cost tiers of storage, and to delete the data that you no longer need or want. By cleaning out your production storage, you will shorten your backup cycles and improve application performance.
The next option for reducing the data storage footprint is to assess the different types of data and where they are in the data life cycle. If your organization is like most, you have all your unstructured data in flat file systems, which are probably full of data that you rarely, if ever, need to access. This may include data you are no longer required by law or policy to keep, but that you haven't deleted, such as old e-mails and memos, that could prove costly if discovered in legal proceedings.
The goal is to identify what data can be moved to less expensive tiers of storage, and what data can be deleted entirely from the environment. This will reduce the need to buy more primary storage capacity and make it easier to manage and protect what you have. Backup and restore performance will improve, and it will be easier to prove that you are meeting data retention and expiration policies.
IBM offers IBM Tivoli Storage Productivity Center for Data for this purpose. This solution reports on where your data is, sorted by access or saved dates, who owns it, the application that created it, and numerous other filters. From the intelligence you gain from these reports, you can set meaningful policies in your data management software to automatically take the appropriate action on data that shouldn't be clogging up your primary systems. Tivoli Storage Productivity Center for Data can also help identify and eliminate duplicate data, orphan data, temporary data and non-business data.
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 5, where we'll talk about automating the migration, archival and expiration of your data.
Shawn Jaques
Living in Boulder, Colorado, I am constantly hearing about "green" initiatives such as recycling, composting, alternative transportation, etc. Over the past several years, my family has been doing a much better job of lessening our impact on the Earth through things such as recycling, buying environmentally friendly products and even signing up for energy saving smart grid technology.
I appreciate it when corporations also do their part to reduce their environmental impact by leveraging greener technologies. But let's face it, most corporations act based on the impact to the bottom line (real or perceived) rather than the impact to the environment. Companies like IBM can make the decisions easier for clients by building products that improve performance while reducing energy use or other environmental impacts.
I'm proud when IBM delivers "green" technology and thus wanted to point your attention to this video about energy efficient storage. Craig Smelser, VP of Security and Storage Development at IBM Tivoli, introduces some of the storage challenges that can be addressed with energy efficient IBM storage software solutions.
For more information, click here
Richard Vining
Data Reduction Chapter 5 - Automated Data Migration
In previous chapters, we’ve talked about the need to reduce your data storage footprint in order to help survive the tidal wave of data, and the first steps in doing so include eliminating unnecessary duplication of data, and then categorizing your data so you can make smarter decisions on where to store it, and for how long.
In this chapter, we take the next step by automating these data management policies through three distinct processes: migration, archival, and expiration. The net result of these processes is to remove unneeded data from your production storage systems, which will reduce or delay your need to acquire more expensive hardware and reduce administrative costs, all without impacting key operational processes.
In the old days of computing and storage management, the concept of transparently moving data from one tier of storage to another was called hierarchical storage management, or HSM. Given IBM’s heritage in mainframes, we still use that term today. More recently, this concept morphed into Information Lifecycle Management (ILM), but it’s the same basic principle – move older, less-frequently accessed data off your most expensive storage devices onto slower, less costly storage media.
HSM and ILM solutions work transparently in the background, automatically selecting and moving files from primary to secondary tiers of storage based on the policy criteria that you set, such as file size or length of time since a file has been opened. They leave a pointer, or stub file, where the data was originally stored so that users and applications don’t need to worry about where the data was moved; the software transparently reroutes the request for any moved files. These solutions automatically move data to the proper media based upon policies you set, freeing up valuable disk space for active files and providing automated access to the migrated files when needed.
Data migration solutions help customers get control of, and efficiently manage, data growth and its associated storage costs by providing automated space management. These solutions should provide the following key features (a simplified stub-file sketch in Python follows the list):
• Storage pool “virtualization” helps maximize utilization of the managed storage resources.
• Restore management is optimized based on the location of the data in the hierarchy.
• Migration is transparent to the users and to applications.
• Migrations are scheduled to minimize network traffic during peak hours.
• Automatic migrations occur outside the backup window.
• By setting proper threshold limits, annoying ‘out of disk space’ messages can be eliminated.
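For intuition about the stub-file mechanism, here is a toy sketch. Real HSM products such as those named below do this transparently inside the filesystem or kernel, so that an ordinary file open triggers the recall; the stub suffix, the threshold and the explicit recall call here are hypothetical simplifications.

import os
import shutil
import time

STUB_SUFFIX = ".stub"            # hypothetical stub marker
AGE_THRESHOLD = 90 * 86400       # migrate files not accessed for 90 days

def migrate(path: str, secondary_dir: str) -> None:
    """Move a file to secondary storage, leaving a stub that points to it."""
    target = os.path.join(secondary_dir, os.path.basename(path))
    shutil.move(path, target)
    with open(path + STUB_SUFFIX, "w") as stub:
        stub.write(target)       # the stub records where the data went

def recall(stub_path: str) -> str:
    """Bring a migrated file back to its original location when accessed."""
    original = stub_path[:-len(STUB_SUFFIX)]
    with open(stub_path) as stub:
        shutil.move(stub.read(), original)
    os.remove(stub_path)
    return original

def sweep(primary_dir: str, secondary_dir: str) -> None:
    """Migrate every file whose last access is older than the threshold."""
    now = time.time()
    for name in os.listdir(primary_dir):
        path = os.path.join(primary_dir, name)
        if os.path.isfile(path) and not name.endswith(STUB_SUFFIX):
            if now - os.stat(path).st_atime > AGE_THRESHOLD:
                migrate(path, secondary_dir)

A real implementation intercepts file access so the recall happens automatically; the sketch makes it explicit only to keep the mechanism visible.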
The IBM Tivoli Storage Manager (TSM) family includes two solutions for automating the migration of data between multiple tiers of storage. TSM 6 for Space Management is for AIX, HP-UX, Solaris and Linux data, while TSM HSM for Windows is for Windows servers.
Tivoli Storage Manager data migration solutions not only help you clean up your primary storage systems to help them run more efficiently, they can also be used to easily move data to new storage technologies as they are deployed. Migrating files to Tivoli Storage Manager also helps expedite restores, because there is no need to restore migrated files in the event of a disaster.
The benefits of Hierarchical Storage Management or Information Lifecycle Management include:
• Improve response times of file servers by off-loading inactive data
• Slow or even stop the growth of your production storage environment
• Use existing storage assets more efficiently
• Reduce backup times and resource usage by focusing on active files only
• Eliminate manual file system clean-up activities
In the next chapter, we’ll look at HSM’s big brother – archiving.
Richard Vining
Data Reduction Chapter 6 - Archiving
I’m back with the next installment on ideas for helping you to reduce the amount of storage capacity you need for an ever-increasing amount of data, and the amount of time you spend managing it. The last chapter covered transparently automating the migration of data from primary storage to secondary systems. An extension of this thought is archiving.
Archiving is another important data reduction technique for certain types of data. One example of this would be financial reporting data (such as weekly, monthly, quarterly, annual data), that needs to be retained for future trending, requirements or auditing, but does not need to consume valuable disk space where live data should reside. Historical medical records and customer statements also often fit into this category.
Archiving is for long-term record retention. It differs from backup in that it keeps files for a specific amount of time (where backup keeps a certain number of versions of a file) while removing the data from the primary production storage systems completely.
Key features of IBM archiving solutions include:
Using IBM archiving solutions for records retention can help you:
IBM offers a choice of solutions for archiving, depending on customer preferences and the applications involved.
Tivoli Storage Manager 6 includes an archiving capability directly integrated into its client backup software. It is policy based, allowing the administrator to set retention times. If the requirement for how long a file must be retained changes, all the administrator has to do is update the policy, and the solution will retroactively update the already archived files; there is no need to restore and re-archive, as some competitive offerings require. Tivoli Storage Manager also offers the option of integrating data from many different applications into your archive repository, and the archive repository can be a virtualized pool of heterogeneous storage systems.
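The retroactive update works because archived objects can reference a shared retention policy rather than carrying a frozen copy of it, so one policy change fans out to everything already archived. Here is a minimal sketch of that idea (my own illustration of the concept, not TSM's actual data model):

from dataclasses import dataclass

@dataclass
class RetentionPolicy:
    retention_days: int

@dataclass
class ArchivedObject:
    name: str
    archived_on_day: int
    policy: RetentionPolicy          # a reference, not a per-object copy

    def expires_on_day(self) -> int:
        return self.archived_on_day + self.policy.retention_days

policy = RetentionPolicy(retention_days=365)
obj = ArchivedObject("q3-financials.pdf", archived_on_day=0, policy=policy)
print(obj.expires_on_day())          # 365

policy.retention_days = 7 * 365      # the retention requirement changes...
print(obj.expires_on_day())          # 2555 -- already-archived data follows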
IBM Information Archive, which contains a specialized version of Tivoli Storage Manager called IBM System Storage™ Archive Manager, is a standalone archive appliance that ingests data directly from more than 40 applications including messaging, healthcare and medical imaging, design and engineering, document management, and others.
Database archiving with IBM Optim and Tivoli Storage Manager
IBM Optim™ Data Growth Solution is a unique database archiving solution that transparently migrates unneeded records from database tables to secondary storage. Like Tivoli Storage Manager’s space management and archive solutions, Optim provides database and storage administrators with a range of cost and performance benefits.
There are also benefits to using Tivoli Storage Manager in conjunction with Optim, which works seamlessly with Tivoli Storage Manager’s application program interface (API) to move archived database records directly into Tivoli Storage Manager’s storage hierarchy.
Optim can also be used with other file-based backup/restore products; however, this involves a two-step process to first archive the data and then back it up. When used with Tivoli Storage Manager, Optim automatically archives database records and then uses the API to store/archive data in a Tivoli Storage Manager storage pool hierarchy. With any other file-based backup/restore product, Optim uses standard file operations to store/archive data in a disk-based file system, and then the backup product can back up the file to supported backup media.
Using Optim and Tivoli Storage Manager together allows you to:
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 7, where we’ll talk about data deduplication and compression as the next options in an effective, holistic approach to reducing your overall data storage footprint.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Ron Riffe
You've probably heard your mother say "you never get a second chance to make a first impression". So, since today marks my first entry into the blogosphere, I wanted to hit a home run, providing not only some interesting perspective, but also some hard facts that readers can use to potentially save some time and money.
If you have been paying much attention to developments in storage and computing infrastructure in the last few years, you have noticed a significant trend toward virtualization. Servers aren't servers any more, they are virtual machines. Tapes aren't tapes any more, they are virtual tape libraries like the IBM TS7650 ProtecTIER Deduplication Appliance. And in the area of disk virtualization, the most widely adopted approach is the IBM SAN Volume Controller (SVC).
Up until now, disk virtualization has been an enterprise-wide thought. Storage managers who are tasked with taking care of hundreds of TBs, and often PBs, of disk have for years turned to SVC to help eliminate the pain of migrating data between arrays. For these administrators, disk virtualization with SVC has also helped provide a common set of management interfaces and procedures across storage from different vendors, and has helped to create a common set of services like thin provisioning, snapshotting, and mirroring across different tiers of storage.
Not every storage manager, though, is responsible for PBs, or even hundreds of TBs, of storage. Most administrators are just looking for an affordable and easy-to-manage means of satisfying the next request for more storage on Exchange, or SAP, or... About a month ago, IBM introduced some important changes in its mid-range disk virtualization product, SVC EE, designed with these storage managers in mind.
Perhaps the best way to describe these changes is with a picture...
One of the challenges with traditional disk arrays is that they are relatively inflexible. Think about it... the arrays that have a lot of function (thin provisioning, excellent snapshotting, mirroring, etc.) are generally large, monolithic things that can take up a lot of real estate and burn a lot of power before you get to the first byte of storage. On the other hand, the arrays that are more modular -- allowing incremental growth -- generally don't offer the best software capabilities. And what's more, all of them generally charge an arm and a leg for the software capabilities they do offer.
The important thing IBM did was to package its virtual controller software in an affordable form factor and price it in such a way that mid-sized administrators can build and grow their storage infrastructure modularly. Do you need more disk capacity for a new application? Add an IBM DS3400 SAS disk enclosure. Do you have plenty of capacity but just want some more performance or connectivity? Add an SVC 8A4 controller pair. Do you have plenty of performance but just want some more capacity for archiving? Add a DS3400 SATA disk enclosure. With this sort of modular approach to scaling, the incremental cost of adding capacity can be greatly reduced.
Regardless of how you choose to grow your virtual disk system, there is a valuable set of services included in the base software license (that is, at no extra charge). They include:
Although I have used IBM DS3400 disk enclosures in my example, a virtual disk system of unlimited size can be constructed using any number of IBM DS3400, DS4000 or DS5000 family disks. SVC EE can also virtualize up to 250 disks from other IBM or non-IBM disk systems.
Lower incremental cost for adding capacity. Efficient SAS and SATA disks. A valuable set of software functions included in the base price. Common management from the smallest configuration to the largest. Would that help save some time and money?
Richard Vining
Data Reduction Chapter 7: Data Deduplication
As discussed in earlier chapters, data deduplication is a hot technology that is used to reduce data storage capacity requirements. If you employ smart choices in backup and data management processes, you might not need data deduplication. But if you keep all of your inactive and unimportant data on your production storage systems, and use backup software that forces you to perform repetitive full backups of all that static data, then data deduplication can provide you with a huge benefit.
The basic idea behind data deduplication is to store just one copy of any data object, and place pointers to the single copy wherever duplicates are eliminated. Some solutions do this at a file level, so that the files have to be exactly the same to be deduplicated. This is often called single-instance storage (SIS). Other solutions deduplicate data at a fixed or variable block length. IBM’s solutions use a blended approach based on the size of the data—file-based for smaller files, and variable block for larger files.
Most deduplication solutions run a checksum algorithm against the selected data to create a hash signature, then check to see if that signature has ever been seen before. If it has, the data is discarded and a pointer to the already stored data is put in its place. A small number of high-end solutions perform a complete byte-level differential comparison of the data to remove all potential for “data collisions,” where two distinct data blocks may share the same hash signature.
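Here is a minimal sketch of that hash-and-verify approach, purely to illustrate the general technique (fixed-size chunks, a SHA-256 signature index, and an optional byte-level comparison to rule out collisions); it is not any IBM product's implementation.

import hashlib

CHUNK_SIZE = 4096

class DedupStore:
    """Toy content-addressed store: each unique chunk is kept once."""

    def __init__(self, verify: bool = True):
        self.chunks = {}       # hash signature -> the single stored chunk
        self.verify = verify   # byte-level comparison guards against collisions

    def write(self, data: bytes) -> list:
        """Store data; return the chunk pointers (signatures) for it."""
        pointers = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            signature = hashlib.sha256(chunk).hexdigest()
            existing = self.chunks.get(signature)
            if existing is None:
                self.chunks[signature] = chunk        # first copy: keep it
            elif self.verify and existing != chunk:
                raise RuntimeError("hash collision detected")
            pointers.append(signature)                # duplicate: pointer only
        return pointers

    def read(self, pointers: list) -> bytes:
        """Re-hydrate the original data from its chunk pointers."""
        return b"".join(self.chunks[p] for p in pointers)

store = DedupStore()
store.write(b"static data " * 1000)    # first backup stores the chunks
store.write(b"static data " * 1000)    # identical second backup adds none
print(len(store.chunks))               # unique chunks are stored just once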
Data deduplication can and does occur at many points in the data creation and management life cycle. In general, these points of deduplication can be broken into source-side, where the data is created, and target-side, where it is stored and managed. Backup applications, for example, can perform source-side deduplication by not transferring data that has previously been backed up over the LAN or WAN, saving on bandwidth.
On the target side, the most popular use of deduplication is in virtual tape libraries, or VTLs. These disk-based systems emulate tape libraries and drives, but apply deduplication to store equivalent amounts of data on disk very cost-effectively while providing performance advantages over tape. Performing deduplication on tape-based systems is considered to be a bad idea, given the portable nature of tapes and the need to recycle them over time; it would be very difficult to guarantee that you maintain the original data for all of the pointers that are out there.
Today, IBM offers two compelling data deduplication solutions. The Extended Edition of Tivoli Storage Manager 6 includes deduplication capabilities to eliminate duplicate data that has been backed up from multiple production systems. Again, TSM’s progressive-incremental backup methodology does not create massive amounts of duplicate data, so the deduplication is only effective when the same data exists on different systems.
The other solution is the IBM System Storage ProtecTIER® family of deduplication systems for reducing data coming from multiple sources, including Tivoli Storage Manager servers, backups from other backup systems, or archive software solutions.
A lot of customers ask when they should use TSM deduplication and when they should use ProtecTIER. I’ll cover this question in detail in my next blog, but the simple answer is:
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Delbert Hoobler
Have you played around with IBM Tivoli Storage FlashCopy Manager on Windows yet? If not, maybe it's time to take a look.
When you think of FlashCopy Manager, think of snapshots. FlashCopy Manager provides fast application-aware backups and restores leveraging advanced snapshot technologies. I have been writing software as a developer for IBM Tivoli Storage Manager for almost 20 years now and this technology is one that is changing the industry. Yes, snapshots have been around for a while, but it isn't until the last few years that applications are really starting to embrace them, and in some cases, even require them for their backup needs. There is just too much data to process, too much overhead to back them up, and too little time. People want their applications to serve email and provide access to database tables, not spend their precious cycles on backups. FlashCopy Manager helps address these issues.
FlashCopy Manager follows on the heels of IBM Tivoli Storage Manager for Copy Services (TSM for CS), which provided snapshot support for Microsoft SQL Server and Microsoft Exchange Server using Microsoft's Volume Shadow Copy Service (VSS). The really cool thing is that you do not need a TSM Server in order to use FlashCopy Manager to manage your snapshots. It will work completely stand-alone if you want. But if you have a TSM Server already, you can use it to extend the power of FlashCopy Manager even more.
What is VSS? VSS is Microsoft's snapshot architecture. It provides the infrastructure for applications, storage vendors, and backup vendors to perform snapshots in a federated and efficient way. Microsoft considers VSS and snapshots important enough to require that new software releases coming out of Redmond be able to be backed up and restored using VSS. If you are running Microsoft Exchange Server or Microsoft SQL Server, you should take a look at snapshots. Microsoft has supported snapshots with Exchange and SQL for years, but Microsoft Exchange Server 2010 is kicking it up a notch: it supports backups only through VSS. Yes, you heard it right, Microsoft does not support legacy-style (streaming) backups with Exchange Server 2010. So, if you are planning a move to Exchange Server 2010, it really behooves you to start looking at Microsoft's Volume Shadow Copy Service (VSS), how it works, and the benefits and complexities it brings with it.
Microsoft's Volume Shadow Copy Service (VSS) is complex and involves multiple moving parts. It will pay for you to invest some time to understand more about it. I have put together some links that will help you get started:
I encourage you to take a look at Windows VSS snapshots and FlashCopy Manager to see how they might help you. Enjoy!
Tiffeni Woodhams
In response to: Cooler Planet Crusader-In-Chief
I agree that the guest speaker selection makes a lot of sense with regards to building a smarter planet. It will be interesting to hear what Al Gore has to say.
Richard Vining
Data Reduction Chapter 8: Deduplication with Tivoli Storage Manager 6, FastBack and ProtecTIER
So far in this series, we’ve detailed the challenges that the tidal wave of data is placing on storage administrators, and how a smarter, more holistic and comprehensive approach to data reduction is needed to survive in a way that lets you do more with less.
We covered eliminating the largest source of duplicate data (full backups) and automating the migration, archiving and deletion of older data. Then, in chapter 7, we covered the basics of data deduplication. Now we’ll detail the differences between IBM’s deduplication offerings, and when to best use each.
Let’s talk first about the deduplication capabilities of Tivoli Storage Manager (TSM). This feature is included at no additional charge for TSM 6 Extended Edition customers. This solution can help to reduce recovery times by enabling you to store more backup data and recovery points on disk rather than tape. It works with the data from all sources – via normal backups, data imported via the TSM API, as well as archive and HSM data. TSM deduplicates your disk-based data pools as a post-process, so there is no impact on backup performance. After running, it automatically reclaims the storage that has been freed up.
TSM already eliminates the most common cause of duplicate data – full backups – so the reduction ratios you can expect from TSM’s deduplication solution are fairly modest; the average is about 40%. But when combined with its progressive incremental backup approach and built-in data compression, TSM’s effective data reduction rate is extremely competitive with any other solution on the market, as detailed in a commissioned report written by Enterprise Strategy Group (ESG), available here (fair warning: registration required).
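As a rough illustration of how those effects compound: the 40 percent deduplication figure is from the paragraph above, while the change rate and the 2:1 compression ratio below are hypothetical numbers chosen only to show the arithmetic.

weekly_full_tb = 20.0     # what a weekly full backup would have moved
change_rate = 0.10        # assumed fraction of data that actually changed

incremental_tb = weekly_full_tb * change_rate       # skip unchanged data
after_dedup_tb = incremental_tb * (1 - 0.40)        # ~40% duplicate removal
after_compress_tb = after_dedup_tb / 2.0            # assumed 2:1 compression

effective_reduction = 1 - after_compress_tb / weekly_full_tb
print(f"Stored {after_compress_tb:.2f} TB instead of {weekly_full_tb} TB "
      f"({effective_reduction:.0%} effective reduction)")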
Announced today, Tivoli Storage Manager FastBack v6.1 also includes target-side data deduplication to help reduce the capacity required in the FastBack backup repository, adding to its value as the leading near-instant recovery solution on the market for business-critical Windows servers and remote/branch offices. Also announced today were Linux support and tighter integration with the Tivoli Storage Manager Integrated Solutions Console (ISC), delivering on IBM’s vision of true enterprise-wide Unified Recovery Management.
IBM System Storage ProtecTIER is a technology leader in performance, scalability, data integrity and reliability. In true apples-to-apples comparisons, this solution is the fastest on the market in real customer environments. A single ProtecTIER system can easily scale in both performance (1000 MB/sec) and capacity (1 PB of deduplicated data). ProtecTIER is one of the few solutions that doesn’t rely on a hash algorithm alone; it performs a byte-level differential to confirm that data really is a duplicate, for enterprise-class data integrity. And ProtecTIER features all-IBM, best-of-breed components versus the inexpensive OEM’d parts found in competitive products.
ProtecTIER has been proven in very large production environments and is supported worldwide by IBM’s services operations. The TS7650 ProtecTIER Deduplication Family ranges from small (7TB) to medium (18TB) to large-scale (36TB) appliances. And the TS7650G gateway offerings allow you to add the storage of your choice, up to 1PB. Active-Active cluster configurations also provide high availability capabilities.
Video on ProtecTIER: http://www.youtube.com/watch?v=6Uk41HpCTqo&feature=related
Review - Choosing TSM or ProtecTIER for Data Deduplication
While TSM works very well in ProtecTIER environments, you wouldn’t use both TSM deduplication and ProtecTIER deduplication simultaneously. That would require twice as much work for no additional benefit. So when should you choose one over the other? Both solutions offer the benefits of target side deduplication: greatly reduced storage capacity requirements (especially when using TSM’s progressive incremental backup). You’ll have lower operational costs, energy usage and Total Cost of Ownership. You also get faster recoveries with more data on disk.
Use TSM 6 built-in data deduplication when you want deduplication operations completely integrated within TSM, when you want the benefits of deduplication without the cost of separate hardware or software (it ships at no charge with TSM 6 Extended Edition), or when you want end-to-end data lifecycle management with minimized data store requirements.
Use ProtecTIER when:
• You need the highest performance up to 1000 MB/sec or more
• You have a large amount of data and need scalable capacity and performance
• You need inline deduplication to avoid the operational impact of post processing
• You are deduplicating across multiple TSM (or other backup) servers
• You don’t have TSM and are performing weekly full backups.
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 9, where we’ll summarize IBM’s holistic approach to data reduction and show you how we can help you survive the tidal wave of data.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Tiffeni Woodhams
New Product Announced Dec. 15, 2009
IBM Tivoli Storage Manager FastBack for Workstations is an automated, continuous data protection and recovery software solution for desktop and laptop computers, with central management for thousands of systems, and integration with other Tivoli Storage Management offerings.
Here is the URL for this bookmark: http://www-01.ibm.com/software/tivoli/products/storage-mgr-fastback-workstation/
Tiffeni Woodhams
In response to: The BIG Questions at Pulse
Those are great questions.
Additionally, you should consider asking yourself these questions that relate to, "What's the Value of this Data to the organization?"
1. Do you have a plan for recovery of that data if lost or corrupted?
2. How fast is that data growing and how are you dealing with the growth?
3. How are you providing increasing service levels with lower cost?
By attending the Storage and Information Infrastructure track at Pulse 2010, you'll find the answers to the questions I've added along with answers to any additional questions you may have concerning your storage, data, and information management.
Take a look at the video below and see how Tivoliman Tames the Data Juggernaut Beast.
Richard Vining
Data Reduction Chapter 9: Surviving the tidal wave of data with IBM data reduction solutions
I hope everyone had a safe and enjoyable holiday, and I’m looking forward to an exciting and prosperous new year. I’d like to take this opportunity to summarize the topics I’ve been covering in this series of data reduction blogs, and give new readers links to the specific topics that you might be interested in.
Please ask yourself these questions:
Through this series, we’ve shown that IBM is the only vendor with a comprehensive set of data reduction solutions that can be applied at multiple points throughout the data creation and management lifecycle. IBM’s broad portfolio of data reduction solutions gives us the freedom to solve your data storage and management issues with the most effective technology for your particular situation. And IBM is continuing to invest in research and development to further develop and deliver the advanced features our customers are requesting.
To learn more, please download my new Data Reduction whitepaper, view the on-demand webcast with Nick Allen, or visit the Data Reduction solution site.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
John Foley
Cloud storage technologies made impressive strides in 2009, and the trend looks to build on that momentum in 2010. IBM is expecting steady growth in cloud storage deployments, especially in the areas of test environments, Web serving, and other non-mission-critical scale-out storage needs.
Standards in this area are just beginning to be discussed and will also be evolving in 2010. Standard file protocols such as CIFS and NFS are obvious starting points for cloud storage access, but other approaches utilizing object storage techniques have also been proposed.
To prepare for cloud storage within the data center, IT managers will need to identify a small number of focused areas to use as starting points. In the short term, cloud storage is a technology that will be deployed to address specific and unique requirements across the enterprise. It is therefore recommended to carefully choose pilot areas where managers can gain insight into how to extend usage into other areas, and build skills for when the technology becomes more widely deployable.
Watch this video to gain important insight into what it takes to deploy and manage highly available cloud storage environments.
Click here for additional information about IBM Cloud Storage Solutions