IBM Systems Storage Software Blog
Milan Patel Tags: information-infrastructur... data-management backup recovery storage-blog archiving
Get ready for Pulse 2010, February 21-24 at the MGM Grand Hotel in Las Vegas. Pulse 2010 will be one of the most important storage and service management conferences of the year, and one that will deliver the information you need to hear directly from your peers, our partners and your IBM Storage team. The conference will include an impressive storage management agenda covering everything from emerging storage technologies, architectures, backup and recovery, and archiving, to managing storage in virtualized data centers and server environments. Once again we are very excited to have your peers share best practices from multiple industries, geographies and companies of various sizes.
As your business and data centers continue to evolve, we continue to evolve and adapt our storage and information infrastructure management solutions to meet your growing needs and facilitate your journey to a dynamic storage infrastructure, with innovative products and services that matter to your bottom line. Pulse 2010 provides us the opportunity to showcase our commitment to you, and you will see firsthand how IBM's increased investment in storage development has produced an aggressive and exciting roadmap that will expand and enhance our capabilities.
Detailed communications on the hotel and Call for Presentations will be coming your way shortly. The key to a successful event is your participation, and we hope you play an active role in the agenda. Please visit the Pulse 2010 website for more details.
Tiffeni Woodhams Tags: storage-blog ibmtivoli ibmstorage information-infrastructur... pulse2010 storage tivoli dynamic-infrastructure pulse service-management ibm
In preparation for Pulse 2010 in Vegas, I interviewed John Connor, the Pulse track lead for Storage and Information Infrastructure, to help you generate good ideas for submitting your call for speaker abstracts for Pulse. John will actually be reviewing the submissions with a team of other folks, so here is some advice that you can leverage to increase your chances of being accepted to speak at Pulse.
Me: What are the hot topics in the area of storage and information infrastructure today?
John: The hot topics in the area of storage and information infrastructure today are how, in today's tight economy, customers are leveraging storage in their information infrastructure to improve scalability, addressing the performance of their storage management assets, cutting capital expenditures by reducing duplicate data to lower storage capacity needs and simplifying the overall management of their storage infrastructure.
Me: Which topics would you like to see presented at Pulse?
John: Ideally I would like to see sessions at Pulse that highlight customer success stories, how Tivoli storage management and/or IBM storage solutions helped customers address the challenges we discussed above.
Me: Who are good candidates for submitting abstracts and why?
John: The best candidates to talk about these successes are the folks who implemented them, which would be our customers. Customers are able to discuss their return on investment and how the IBM storage solutions are benefiting them in their everyday business operations. Another good candidate would be our business partners, accompanying and co-presenting with their clients on the IBM storage solutions they've implemented.
Me: What are you looking for in a good proposal?
John: As I mentioned earlier about the topics I would like to see presented, a good proposal is a customer success story around IBM storage solutions, including Tivoli storage management software, and/or storage hardware and storage services. This proposal should describe the initial pain points or problems that existed, how our solutions helped and the lessons learned that could be applied to other customer situations. This type of proposal and session at Pulse will help others learn from each other.
Me: What are the benefits of submitting an abstract for Pulse?
John: Submitting your abstract is a great way to gain visibility for your work and your particular solution. Customers that submit abstracts and are selected will receive a complimentary pass to attend Pulse ($1,995 value) and admission to the on-site VIP client lounge. Attending Pulse is not only a great way to share your company's success implementing IBM storage solutions, but it is also a great education and networking opportunity.
Me: What is the deadline for submitting call for speaker abstracts?
John: The deadline to submit your abstract is Nov. 20th. Don't delay; submit your proposal today.
With such great guidance from John, you're sure to write a perfect proposal. If you have any questions on submitting abstracts for Pulse or want feedback on an idea, just leave a blog comment. Also, be sure to check out this justification letter if you need that extra edge to convince your boss of the value of attending Pulse. I hope to see you there!
Tiffeni Woodhams Tags: ibm storage-software storage ibmstorage tivoli ibmtivoli storage-blog storage-management
Welcome to the Tivoli Storage blog.
We have gathered a team of SMEs from various areas of the business to discuss a variety of topics, spanning different interest areas including customer success stories, upcoming events, Business Partner spotlights, technical tips and tricks, product strategy, roadmaps and hot topics -- and of course, topics of interest to you!
Introducing the team!
BJ Klingenberg: Senior Technical Staff Member - Storage Software, IBM Software Group
BJ has over 25 years of storage software strategy and development experience. He has held various technical and management positions, nearly all of which have been related to storage software. His experience in enterprise storage management includes DFSMS, DFSMShsm, DFSMSdss, as well as Tivoli Storage Manager, Tivoli Storage Productivity Center (TPC) and System Storage SAN Volume Controller (SVC). He has also been involved in projects that apply ITIL management best practices to enterprise storage management. BJ is currently focusing on storage archiving solutions. BJ is a graduate of the University of Illinois Urbana-Champaign, where he received a Bachelor of Science degree in Computer Science, and holds a Master of Science degree in Computer Science from the University of Arizona.
Dave Rice: Business Partner Marketing, Tivoli Storage Software
Dave currently works in IBM's Worldwide Software Group, where he drives Business Partner marketing for Tivoli storage software with a particular focus on the Asia Pacific and Japan geographies. In this role, Dave influences the Business Partner sales pipeline through lead/pipeline analysis, progression activities, partner communications, and implementing programs that provide Business Partner opportunity identification. Dave has been in a broad set of storage software marketing roles for the past 13 years, and has 35 years with IBM. Outside of IBM, Dave's interests include astronomy, as well as home and life improvement projects.
Del Hoobler: Senior Software Engineer
Del is a Senior Software Engineer who has worked for IBM for over 20 years in software design, development and services. For the past 13 years, he has worked on designing and developing software products for the IBM Tivoli Storage Manager (TSM) suite of products. Most recently, Del was the technical development lead for the TSM Windows snapshot (VSS) support for Microsoft Exchange Server and Microsoft SQL Server. Del enjoys working with people and helping solve their complicated IT problems.
Devon Helms is currently an intern with the IBM Tivoli Software group and a second-year MBA candidate at the Paul Merage School of Business at UC Irvine. His studies focus on business strategy and corporate finance. Before returning to the academic world to pursue his MBA, Devon was a business operations and technology consultant. He has been involved in hundreds of engagements, analyzing and improving his customers' business processes. After his studies are complete, Devon wants to continue to help clients improve the performance of their businesses through business process and financial analysis. In his free time, Devon is an avid marathon runner, rock climber, and SCUBA diver. Devon lives in Lakewood, CA with his lovely wife, Shana, and his 8-year-old Siberian Husky and faithful running partner, Frosty.
Greg Tevis: Tivoli Storage Technical Strategist
Greg has over 27 years in IBM storage hardware and software development. He worked in ADSM/TSM architecture and technical support in the 1990s and was one of the original architects of IBM's storage resource management solution, Tivoli Storage Productivity Center (TPC). He currently has responsibility for technology strategy for all Tivoli Storage and was involved in all of the recent IBM Storage acquisitions including XIV, Diligent, FilesX, Novus Consulting, and Arsenal Digital.
Jason has been the product manager for the Tivoli Storage Productivity Center (TPC) family since joining IBM in 2006. Prior to joining IBM, Jason was a product manager at EMC and Prisa Networks, responsible for the road map and strategy of various storage management offerings. When not helping define the direction for TPC, Jason acts as the President for Classic Soccer Club, a youth soccer club where his son currently plays.
John Connor: Product Manager
John is the Product Manager for IBM's flagship data protection and recovery offerings, the Tivoli Storage Manager family. During John's tenure as product manager, TSM has experienced strong growth, growing faster than the overall market and gaining market share. Prior to joining the Tivoli Storage Manager team in 2005, John helped drive the business strategy for IBM Retail Store Solutions. Before that, John had product and marketing roles in various IBM software businesses, including WebSphere and networking software. John has an MBA from Duke University and an undergraduate degree in electrical engineering from Manhattan College. In his spare time, John enjoys competing in triathlons and has successfully completed an Ironman triathlon.
John R. Foley Jr.: Product Marketing Manager
John is currently a marketing manager within IBM's Tivoli storage software marketing team. John has over 20 years of experience in the areas of storage hardware, storage software and system networking. He has held positions in management, product line management, strategy, business development and marketing. In the past 10 years, he has served on multiple storage projects including SAN storage (fibre channel & iSCSI), Network Attached Storage (NAS) and fibre channel switch offerings. Most recent projects include the introduction of IBM's System Storage N series portfolio stemming from the NetApp OEM agreement and the release to market of IBM's newly introduced Tivoli Storage Productivity Center Version 4 and IBM Information Archive Version 1.
Kelly Beavers: IBM Storage Software Business Line Executive
Kelly joined the IBM Storage Software team in 2004 as Director of Strategy and Product Management for Storage Software and Solutions. Her team is responsible for guiding the development and release of products that capitalize on market/technology trends, and for defining and executing tactical go-to-market plans for IBM storage software solutions across both the Tivoli and Systems Storage brands. Kelly has 28 years with IBM where she's held a variety of roles including Finance, Pricing, Tivoli Channel Development, Director of Customer Insight, managing Market Intelligence, Customer Relations and Marketing Operations. Kelly is married with two daughters, ages 19 and 12.
Matt Anglin: Tivoli Storage Manager Development
Matt has been a member of the Tivoli Storage Manager Server Development Team for 15 years. His areas of expertise include data movement to and within the server, deduplication, shredding, and DB2 interactions. He is the AIX platform expert in TSM, and is knowledgeable about other Unix, Linux, and Windows platforms. Matt lives in Tucson, Arizona.
Matthew Geiser: Manager, Storage Software Product Management
Matt joined IBM in 2001 and has worked in product management and product development for Storage Software offerings including SAN Volume Controller, Tivoli Storage Productivity Center, Tivoli Storage Manager and IBM Information Archive. Matt's current responsibilities include managing the product management team for the storage infrastructure management offerings. Prior to IBM, Matt worked in a variety of operations, project management and software development roles in the banking and energy industries.
Milan Patel: Senior Product Marketing Manager
Milan is responsible for product marketing of IBM storage software for virtualized server environments, storage clouds and, of course, everyday issues in storage management like backup, recovery, archiving and replication. Milan has been with IBM for over 6 years, working in server and storage systems and storage software marketing groups. Prior to that, Milan spent 13 years in various capacities, from development to product management, across various server subsystems and systems management.
Richard Vining: Product Marketing Manager
Rich is the Product Marketing Manager responsible for the IBM Tivoli Storage Manager portfolio of products. Rich joined IBM in April 2008 as part of the acquisition of FilesX, where he served as Director of Marketing. Rich has more than 20 years of experience in the data storage industry, holding senior management roles in marketing, alliances, customer support and product management at a number of leading edge companies, including Signiant, OTG Software, Plasmon and Cygnet. Rich enjoys eating, drinking, travelling and golfing (but doesn't everybody?)
Rodney Fannin: Worldwide Channel Manager, Tivoli Storage Software
Rodney has over 15 years of experience working with Business Partners. His primary responsibilities include refining the channel strategy for Storage software and developing sales and marketing tactics to increase reseller revenue worldwide. Rodney is also a contributing author for the BP Spotlight on our blog.
Roger Wofford: Product Manager
Roger is currently a Product Manager in Tivoli Storage Software. He has experience in Manufacturing, Development, Marketing and Sales within IBM. He enjoys golf, swimming and the Rocky Mountains. Roger plans to blog about how customers use archiving solutions in their storage environments.
Ron Riffe: IBM Storage Software Business Strategist
Ron is currently the business strategist for IBM Storage Software. During the last six years, Ron has been devising and implementing IBM's storage software strategy with a focus on creating greater client value through integrating IBM storage software and storage hardware offerings. Ron has managed storage systems and storage management software for more than 23 years, holding positions in senior management, product line management, strategy and business development for both IBM System Storage and IBM Tivoli Storage. Ron has written papers on the synergies of storage automation and virtualization and frequently speaks at conferences and customer locations on the subject of storage software. Prior to joining IBM, Ron spent 10 years as a corporate storage manager for international manufacturing firm Texas Instruments after receiving a B.S. in Computer Science from Texas A&M University.
Shawn Jaques: Manager, IBM Tivoli Storage Product Management
Shawn has been in his current role as manager of storage software product management for nearly three years. The team is responsible for product strategy, content, positioning and pricing of IBM storage software solutions. Prior to that, Shawn had product and market management roles in other Tivoli product areas, as well as a stint in Tivoli Strategy. Before joining IBM, Shawn was a Consulting Manager at Cap Gemini and an Audit Manager at KPMG. Shawn has a Master of Business Administration from The University of Texas at Austin and a Bachelor of Science from the University of Montana. He lives in Boulder, Colorado and enjoys fly-fishing, skiing and hiking with his wife and kids.
Terese Knicky: Analyst Relations Tivoli
Terese is with Tivoli's analyst relations team, covering Storage, System z, Job Scheduling and IBM's general enterprise solutions. Terese was born and raised in Omaha, NE and transplanted to Texas, where she enjoys watching her two boys play college football.
And finally, let's talk about me. I'm Tiffeni Woodhams and I have been with IBM for nearly seven years. Currently, I am a Tivoli Storage Marketing Manager, responsible for general marketing activities ranging from pipeline measurement and tracking to providing marketing execution guidance and communications to the geography teams. I am also the Tivoli Storage Social Media lead and co-lead for the IBM Storage social computing strategy, and I work on major launches like Dynamic Infrastructure and Information Infrastructure, providing the storage messaging and linkages. Prior to this role, I held several other marketing positions, including Tivoli Provisioning Go-to-Market Manager, Benelux Software Marketing Manager focusing on Tivoli, WebSphere, and Lotus, Americas Tivoli Marketing Manager, and Tivoli Launch Strategist. In my spare time, I enjoy playing sports (basketball, softball, and golf), coaching JV girls basketball, riding horses, and spending time with family and friends.
Now that you know a little background on each of the team members, we hope that you will let us know some of your interest areas when it comes to IBM Storage and IBM Tivoli Storage Software solutions. Please post comments to this blog and let us know what you want to hear about.
Some topics we will be discussing in the next month include:
Pulse 2010, the Premier Service Management Event
Data Reduction - the steps to get to where you want to be
Archiving - why you need to do it
Unified Recovery Management
New Product announcements and roadmaps.
Thanks and we look forward to hearing your feedback.
Richard Vining Tags: storage-blog data-management space-managment data-reduction hsm deduplication backup archive
Data Reduction Chapter 1: The challenges posed by the tidal wave of data
We're storing and using more data than ever before. The volume of data is growing exponentially, government regulations are expanding, and competitive pressures are increasing, forcing us to retain more of our data for longer periods of time. But our budgets are flat or being cut. And as we become more dependent on digital information, the costs of losing any of it are increasingly painful. The bottom line, of course, is that we need to do a better job of managing our data assets, and as these assets grow and our budgets shrink, we need to do more with less. So we need smarter solutions.
Storage administrators are on the front lines of the Tidal Wave of Data battle. Some of the challenges from data growth that administrators are struggling with include:
- It takes longer to perform backups, often not completing within backup window allowances
- Some data is not being adequately protected

IBM can help you build a dynamic storage management infrastructure that will enable you to cope with all of these challenges. We have solutions to help reduce your data storage footprint, and the goals that we set out in these solutions are: to reduce your capital and operational costs; to improve your application availability and service levels; and to help you mitigate the risks associated with losing data and a rapidly changing environment.
With these solutions you should: need less storage; have less data to manage; experience less downtime; and be more competitive. To learn more, please visit the Data Reduction Solutions web page and stay tuned for Chapter 2, where we will outline a holistic and comprehensive approach to data reduction.
Richard Vining Tags: archive storage-blog data-management deduplication hsm backup data-reduction space-managment
Data Reduction Chapter 2: Surviving the tidal wave of data - options for data reduction
In chapter 1, we discussed the struggles that storage administrators are having with the tidal wave of data. In this chapter, we'll begin talking about how data reduction technologies can help you survive and even thrive in the face of these challenges.
IBM takes a holistic approach to data reduction, unlike competitors that offer point solutions to problems that they may in fact be causing. For example, a huge contributor to data growth is the repeated duplication of large amounts of data every time you perform a full backup.
So, one option is to avoid data growth from unnecessary data duplication, by only backing up data that has changed since the last backup. This addresses the cause of the problem, not the symptom. For example, if you have a 5 percent per week data change rate, 95 percent of your data didn't change this week. If you perform a full backup this weekend, you're duplicating almost everything you backed up last weekend. Not only does that take a lot of storage capacity, but it also takes a long time, and these problems only get worse as you create more new data. It's no wonder that data deduplication products are so popular; they were designed to eliminate all this duplicate data. And when they claim to reduce your backup storage footprint by 95 percent or more, this is exactly the data that they're talking about.
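To put rough numbers on this, here is a small Python sketch (the figures are illustrative, not taken from any IBM tool) that estimates how much of a weekly full backup is duplicate, unchanged data for a given change rate:

```python
# Illustrative sketch: how much duplicate data a weekly full backup creates.
# The 2 TB size and 5 percent weekly change rate are hypothetical examples.

def weekly_full_backup_duplication(total_gb: float, weekly_change_rate: float) -> dict:
    """Estimate how much of a weekly full backup is unchanged (duplicate) data."""
    changed = total_gb * weekly_change_rate
    unchanged = total_gb - changed
    return {
        "transferred_gb": total_gb,   # a full backup copies everything
        "changed_gb": changed,        # data that actually needed protecting again
        "duplicate_gb": unchanged,    # data identical to last week's copy
        "duplicate_pct": 100 * unchanged / total_gb,
    }

if __name__ == "__main__":
    stats = weekly_full_backup_duplication(total_gb=2000, weekly_change_rate=0.05)
    for key, value in stats.items():
        print(f"{key}: {value:,.0f}")
```

With these example inputs, 1,900 GB of the 2,000 GB transferred each weekend is data that was already backed up the week before, which is exactly the duplication that deduplication products, and incremental-only backups, set out to eliminate.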
Another option is to determine what different types of data you have and categorize it so that you can manage it most effectively, by moving less frequently-accessed data to lower-cost tiers of storage, and by deleting data that you no longer need or want. This will shorten your backup cycles and improve application performance, as well as reduce or delay the need to buy more primary storage capacity.
A third option is to put automated processes in place, based on policies that meet business requirements and/or service level agreements, to migrate, archive and delete data. There are several actions that can be taken on your data files based on criteria such as age, how long it has been since last access, which application created it, etc. These automated solutions can include:
Transparent migration of data from production storage systems to a hierarchy of secondary systems; the data remains on-line and available without any modifications to applications.
Archival of data, removing it completely from production systems and storing it in secure storage where retention policies can be set and managed.
Expiration of data, deleting it from all storage once it is no longer needed or to meet corporate governance policies.
The last option is to compress and deduplicate the data you end up putting into your data protection and retention systems. Data deduplication is the most popular technology in this category, and we'll discuss it and the other technologies mentioned above in greater detail in future chapters of this blog.
To learn more, please visit the Data Reduction Solutions web page and stay tuned for Chapter 3 in which we'll dig into the first step in effective data reduction.
Richard Vining Tags: data-management storage-blog backup data-reduction deduplication hsm space-managment archive
Data Reduction Chapter 3: Avoiding data duplication
As we discussed in chapter 2, repeatedly duplicating unchanged data with every full backup not only takes a lot of storage capacity, but it also takes a long time, and these problems only get worse as you create more new data. (It's no wonder that data deduplication products are so popular; they were designed to eliminate all of this duplicate data. And when they claim to reduce your backup storage footprint by 90 percent or more, this is exactly the data that they're talking about.)
But what if you never had to perform a full backup again after the initial one? If you always backed up only the new and changed data, you wouldn't be creating all that duplicate data that needs an expensive deduplication solution to undo. Shorter backup windows, less storage required, and reduced storage acquisition costs would all be benefits of eliminating that weekly full backup. So would faster restore times, since deduplicated data wouldn't need to be re-hydrated in order to be useful.
IBM has smarter solutions that can help prevent the need to perform full backups. The products in the IBM Tivoli® Storage Manager portfolio of recovery management solutions all provide incremental-forever backups.
These are the common backup methodologies and how they compare on backup and restore processing:
Full + incremental
Backup: This requires a full backup and then incremental backups over time, usually a full backup each weekend with incremental backups for the following six days. Only data that has changed from the day before is transferred to tape. Then at the end of the week another full backup must be run.
Restore: The full backup must be restored, then each day's incremental data applied to it. This means that if you have a full backup and three incremental backups of the same file, it will be restored four times. This wastes time and money, and introduces risk.
Full + differential
Backup: This requires a full backup and then differential backups over time, usually a full backup each weekend with differential backups for the following six days. All data that has changed since the last full backup is backed up each day. If you assume a 10 percent daily change rate, then you will back up 100 percent (full) on the first day, 10 percent on the second, 20 percent on the third, 30 percent on the fourth, 40 percent on the fifth, 50 percent on the sixth, and 60 percent on the seventh. That means you are backing up 310 percent of your data every week! You'll need more than 12 times your production capacity for just a month of backups.
Restore: You would restore the full backup and then the last differential up to the date you were restoring to. This is faster and more reliable than the full + incremental model, but at the cost of much more storage capacity.
Incremental forever
Backup: This requires a full backup the first time you back up, and then only incremental backups from then on. There are no extra transfers of data, which saves network bandwidth and transfer time, makes backup and restore faster, and can save thousands of dollars in disk and tape costs.
Restore: You select the point in time that you want to restore from, and then restore the necessary files just once. This is much faster than with the other two methods.
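The weekly transfer volumes of the three methods can be compared with a few lines of Python. This is a back-of-the-envelope sketch: the 2 TB base, 200 GB daily change, and the assumption that the same amount of data changes every day are illustrative only.

```python
# Estimate data transferred over one 7-day cycle under each methodology.
# Assumes the initial full backup has already been taken and the same
# amount of data changes every day (illustrative figures only).

def weekly_transfer_gb(total_gb: float, daily_change_gb: float) -> dict:
    """Data moved in one week under each backup model."""
    # Full + incremental: one weekly full, then six one-day incrementals.
    full_incremental = total_gb + 6 * daily_change_gb
    # Full + differential: each day re-copies everything since the weekly full.
    full_differential = total_gb + sum(n * daily_change_gb for n in range(1, 7))
    # Incremental forever: only that day's changes, every day.
    incremental_forever = 7 * daily_change_gb
    return {
        "full + incremental": full_incremental,
        "full + differential": full_differential,
        "incremental forever": incremental_forever,
    }

if __name__ == "__main__":
    for method, gb in weekly_transfer_gb(total_gb=2000, daily_change_gb=200).items():
        print(f"{method}: {gb:,.0f} GB per week")
```

Under these example assumptions, incremental forever moves 1,400 GB per week, versus 3,200 GB for full + incremental and 6,200 GB for full + differential.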
The analysis shown in the figure above starts with 2TB of data and adds or changes 200GB per day. The assumption is that a full backup has already been performed to set the base.
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 4, where we'll cover the discovery and categorization of data to help move it intelligently throughout its lifecycle.
Delbert Hoobler Tags: tsm tivuser data-management software storage-blog storage fcm backup
Come join me for "Ask the Experts online Jam"!
What is the "Ask the Experts online Jam"?
The "Ask the Experts Online Jam" is a valuable opportunity for YOU to connect with 75+ real-world IBM experts on 30+ Tivoli products. These experts, many from IBM development, are recruited to answer your questions for a concentrated period of 12 hours (8am - 8pm US Eastern).
When is the "Ask the Experts online Jam"?
November 12, 2009, 8AM - 8PM US Eastern. To find the time in your city, check out the World Clock meeting planner website.
Here's how it works in brief:
Step 1: You have a question - usually fairly technical.
Step 2: You find the expert best suited to answer it by browsing experts by pre-defined category and specific product.
Step 3: You fill in a field on the "Ask the Experts online Jam" web application to submit the question.
Step 4: You receive an email answer to your question(s), and the Ask the Experts Jam web application is updated for other members to see.
Ask questions of 75+ IBM experts on the following 30+ topics:
Datacenter Management tools: IBM Tivoli Monitoring, IBM Tivoli Composite Application Manager for Transactions and WebSphere/J2EE, Tivoli Application Dependency Discovery Manager, Tivoli Provisioning Manager, Tivoli Service Request Manager,
Network, Service Assurance and Events: Tivoli Netcool Impact, Tivoli Netcool Performance Flow Analyzer, Tivoli Netcool Performance Manager, Tivoli Netcool/OMNIbus, Tivoli Network Manager (Precision and NetView/d),
Asset Management: Asset Management for IT and Enterprise, Enterprise Asset Management Trends and IBM Maximo Industry Solutions,
Security: Tivoli Access Manager, Tivoli Identity Manager, Tivoli Federated Identity Manager, Tivoli Enterprise Access Manager Single Sign On, Tivoli Compliance Insight Manager, Tivoli Directory Server, Tivoli Key Lifecycle Manager, Tivoli Security Information and Event Manager, Tivoli Security Policy Manager,
Storage: Tivoli Storage FlashCopy Manager on AIX and Windows, Tivoli Storage Manager, Tivoli Storage Productivity Center, Tivoli Storage Manager (TSM) FastBack,
z/OS: NetView for z/OS, OMEGAMON, Tivoli Security for System z: Tivoli zSecure Suite
Click here for more information.
I personally will be available from 8am to 2pm covering IBM Tivoli Storage FlashCopy Manager on Windows but there will also be many other storage experts available for the entire 12 hours. Please join us!
Richard Vining Tags: backup deduplication storage-blog space-managment archive hsm data-management data-reduction
Data Reduction Chapter 4: Categorize your data for migration & deletion
In the last chapter, we discussed eliminating one of the biggest causes of data growth: the duplication of large amounts of data every time you perform a full backup. In this chapter, we'll explore the benefits of determining what different types of data you have and categorizing it so that you can manage it most effectively. This will help you set up policies to migrate less frequently accessed data to lower-cost tiers of storage, and to delete the data that you no longer need or want. By cleaning out your production storage, you will shorten your backup cycles and improve application performance.
The next option for reducing the data storage footprint is to assess the different types of data and where they are in the data life cycle. If your organization is like most, you have all your unstructured data in flat file systems, which are probably full of data that you rarely, if ever, need to access. This may include data you are no longer required by law or policy to keep, but that you haven't deleted, such as old e-mails and memos, that could prove costly if discovered in legal proceedings.
The goal is to identify what data can be moved to less expensive tiers of storage, and what data can be deleted entirely from the environment. This will reduce the need to buy more primary storage capacity and make it easier to manage and protect what you have. Backup and restore performance will improve, and it will be easier to prove that you are meeting data retention and expiration policies.
IBM offers IBM Tivoli Storage Productivity Center for Data for this purpose. This solution reports on where your data is, sorted by access or save dates, who owns it, the application that created it, and numerous other filters. From the intelligence you gain from these reports, you can set meaningful policies in your data management software to automatically take the appropriate action on data that shouldn't be clogging up your primary systems. Tivoli Storage Productivity Center for Data can also help identify and eliminate duplicate data, orphan data, temporary data and non-business data.
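As a simplified illustration of this kind of discovery reporting (a sketch of the idea, not how Tivoli Storage Productivity Center for Data is actually implemented), a short Python script can bucket the files under a directory tree by last-access age:

```python
# Simplified sketch of a storage discovery report: count the files under a
# directory tree per last-access age bucket. Bucket boundaries are examples.
import os
import time
from collections import Counter

# Age buckets, checked in order; anything older falls into "> 1 year".
BUCKETS = [(30, "< 30 days"), (90, "30-90 days"), (365, "90-365 days")]

def age_report(root: str) -> Counter:
    """Count the files under `root` per last-access age bucket."""
    now = time.time()
    report = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                age_days = (now - os.stat(path).st_atime) / 86400
            except OSError:
                continue  # file vanished or is unreadable; skip it
            label = next((lbl for limit, lbl in BUCKETS if age_days < limit),
                         "> 1 year")
            report[label] += 1
    return report
```

A real reporting tool would add size totals, owners and creating applications to each bucket; the point here is simply that once data is categorized this way, the migration and deletion policies have something concrete to act on.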
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 5, where we'll talk about automating the migration, archival and expiration of your data.
Shawn Jaques 1200007FSY email@example.com Tags:  green-it storage-blog storage-management storage energy-effeciency green storage-software energy 2,116 Visits
Living in Boulder, Colorado, I am constantly hearing about "green" initiatives such as recycling, composting, alternative transportation, etc. Over the past several years, my family has been doing a much better job of lessening our impact on the Earth through things such as recycling, buying environmentally friendly products and even signing up for energy saving smart grid technology.
I appreciate when corporations also do their part to reduce their environmental impact by leveraging greener technologies. But let's face it: most corporations act based on the impact to the bottom line (real or perceived) rather than the impact to the environment. Companies like IBM can make the decisions easier for clients by building products that improve performance while reducing energy use or other environmental impacts.
I'm proud when IBM delivers "green" technology and thus wanted to point your attention to this video about energy efficient storage. Craig Smelser, VP of Security and Storage Development at IBM Tivoli, introduces some of the storage challenges that can be addressed with energy efficient IBM storage software solutions.
For more information, click here
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  storage-blog data-management archive data-reduction backup deduplication hsm space-managment 2,028 Visits
Data Reduction Chapter 5 - Automated Data Migration
In previous chapters, we’ve talked about the need to reduce your data storage footprint in order to help survive the tidal wave of data, and the first steps in doing so include eliminating unnecessary duplication of data, and then categorizing your data so you can make smarter decisions on where to store it, and for how long.
In this chapter, we take the next step by automating these data management policies through three distinct processes: migration, archival, and expiration. The net result of these processes is to remove unneeded data from your production storage systems, which will reduce or delay your need to acquire more expensive hardware and reduce administrative costs, all without impacting key operational processes.
In the old days of computing and storage management, the concept of transparently moving data from one tier of storage to another was called hierarchical storage management, or HSM. Given IBM’s heritage in mainframes, we still use that term today. More recently, this concept morphed into Information Lifecycle Management (ILM), but it’s the same basic principle – move older, less-frequently accessed data off your most expensive storage devices onto slower, less costly storage media.
HSM and ILM solutions work transparently in the background, automatically selecting and moving files from primary to secondary tiers of storage based on the policy criteria that you set, such as file size or length of time since a file has been opened. They leave a pointer, or stub file, where the data was originally stored so that users and applications don’t need to worry about where the data was moved; the software transparently reroutes the request for any moved files. These solutions automatically move data to the proper media based upon policies you set, freeing up valuable disk space for active files and providing automated access to the migrated files when needed.
Data migration solutions help customers get control of, and efficiently manage, data growth and its associated storage costs by providing automated space management. These solutions should provide the following key features:
• Storage pool “virtualization” helps maximize utilization of the managed storage resources.
• Restore management is optimized based on the location of the data in the hierarchy.
• Migration is transparent to the users and to applications.
• Migrations are scheduled to minimize network traffic during peak hours.
• Automatic migrations occur outside the backup window.
• By setting proper threshold limits, annoying ‘out of disk space’ messages can be eliminated.
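The stub-and-reroute mechanism described above can be sketched in a few lines. This is a toy illustration, not how TSM's space management actually works: real HSM products intercept file access transparently inside the filesystem, whereas here the `.hsm-stub` marker file and the `migrate`/`read_file` helpers are invented names that make the mechanism visible.

```python
import shutil
from pathlib import Path

STUB_SUFFIX = ".hsm-stub"  # hypothetical marker; real HSM uses filesystem metadata

def migrate(path: Path, secondary: Path) -> Path:
    """Move a file to the secondary tier and leave a stub
    recording its new location."""
    secondary.mkdir(parents=True, exist_ok=True)
    target = secondary / path.name
    shutil.move(str(path), str(target))
    stub = Path(str(path) + STUB_SUFFIX)
    stub.write_text(str(target))  # pointer to the migrated copy
    return stub

def read_file(path: Path) -> bytes:
    """Transparent access: follow the stub if the file was migrated,
    otherwise read the file in place."""
    stub = Path(str(path) + STUB_SUFFIX)
    if stub.exists():
        return Path(stub.read_text()).read_bytes()
    return path.read_bytes()
```

The point of the sketch is that callers keep using the original path; only the storage location changes, which is what makes policy-driven migration invisible to users and applications.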
The IBM Tivoli Storage Manager (TSM) family includes two solutions for automating the migration of data between multiple tiers of storage. TSM 6 for Space Management is for AIX, HP-UX, Solaris and Linux data, while TSM HSM for Windows is for Windows servers.
Tivoli Storage Manager data migration solutions not only help you clean up your primary storage systems to help them run more efficiently, they can also be used to easily move data to new storage technologies as they are deployed. Migrating files to Tivoli Storage Manager also helps expedite restores, because there is no need to restore migrated files in the event of a disaster.
The benefits of Hierarchical Storage Management or Information Lifecycle Management include:
• Improve response times of file servers by off-loading inactive data
• Slow or even stop the growth of your production storage environment
• Use existing storage assets more efficiently
• Reduce backup times and resource usage by focusing on active files only
• Eliminate manual file system clean-up activities
In the next chapter, we’ll look at HSM’s big brother – archiving.
The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.
Richard Vining 2700019R2A email@example.com Tags:  hsm space-managment storage-blog backup archive data-management deduplication data-reduction 2,314 Visits
Data Reduction Chapter 6 - Archiving
I’m back with the next installment on ideas for helping you to reduce the amount of storage capacity you need for an ever-increasing amount of data, and the amount of time you spend managing it. The last chapter covered transparently automating the migration of data from primary storage to secondary systems. An extension of this thought is archiving.
Archiving is another important data reduction technique for certain types of data. One example of this would be financial reporting data (such as weekly, monthly, quarterly, annual data), that needs to be retained for future trending, requirements or auditing, but does not need to consume valuable disk space where live data should reside. Historical medical records and customer statements also often fit into this category.
Archiving is for long-term record retention. It differs from backup in that it keeps files for a specific amount of time (where backup keeps a certain number of versions of a file) while removing the data from the primary production storage systems completely.
Key features of IBM archiving solutions include:
Using IBM archiving solutions for records retention can help you:
IBM offers a choice of solutions for archiving, depending on customer preferences and the applications involved.
Tivoli Storage Manager 6 includes an archiving capability directly integrated into its client backup software. It is policy based, allowing the administrator to set retention times. If the requirement for how long a file must be retained changes, all the administrator has to do is update the policy, and the solution will retroactively update the already archived files; there is no need to restore and re-archive, as some competitive offerings require. Tivoli Storage Manager also offers the option of integrating data from many different applications into your archive repository, and the archive repository can be a virtualized pool of heterogeneous storage systems.
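The retention model, including the retroactive policy update described above, can be illustrated with a toy repository. The class and method names here are invented for the sketch, not TSM's API; the one essential idea is that retention is a single mutable policy consulted at expiration time, so changing it affects data that is already archived.

```python
import time

class ArchiveRepository:
    """Toy archive: objects are kept for a retention period (the archive
    model) rather than as N versions of a file (the backup model)."""

    def __init__(self, retention_days):
        self.retention_days = retention_days  # single mutable policy
        self._items = {}  # name -> archive timestamp (seconds)

    def archive(self, name, now=None):
        self._items[name] = time.time() if now is None else now

    def expire(self, now=None):
        """Delete and return everything older than the *current* policy."""
        now = time.time() if now is None else now
        cutoff = now - self.retention_days * 86400
        expired = [n for n, t in self._items.items() if t < cutoff]
        for n in expired:
            del self._items[n]
        return expired
```

Extending `retention_days` before the next expiration run is all it takes to keep already-archived objects longer; nothing needs to be restored and re-archived.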
IBM Information Archive, which contains a specialized version of Tivoli Storage Manager called IBM System Storage™ Archive Manager, is a standalone archive appliance that ingests data directly from more than 40 applications including messaging, healthcare and medical imaging, design and engineering, document management, and others.
Database archiving with IBM Optim and Tivoli Storage Manager
IBM Optim™ Data Growth Solution is a unique database archiving solution that transparently migrates unneeded records from database tables to secondary storage. Like Tivoli Storage Manager’s space management and archive solutions, Optim provides database and storage administrators with a range of cost and performance benefits.
There are also benefits to using Tivoli Storage Manager in conjunction with Optim, which works seamlessly with Tivoli Storage Manager’s application program interface (API) to move archived database records directly into Tivoli Storage Manager’s storage hierarchy.
Optim can also be used with other file-based backup/restore products; however, this involves a two-step process to first archive the data and then back it up. When used with Tivoli Storage Manager, Optim automatically archives database records and then uses the API to store/archive data in a Tivoli Storage Manager storage pool hierarchy. With any other file-based backup/restore product, Optim uses standard file operations to store/archive data in a disk-based file system, and then the backup product can backup the file to supported backup media.
Using Optim and Tivoli Storage Manager together allows you to:
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 7, where we’ll talk about data deduplication and compression as the next options in an effective, holistic approach to reducing your overall data storage footprint.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Ron Riffe 100000EXC7 firstname.lastname@example.org Tags:  scv ibmstorage storage-blog protectier deduplication virtualization storage 2,950 Visits
You've probably heard your mother say "you never get a second chance to make a first impression". So, since today marks my first entry into the blogosphere, I wanted to hit a home run, providing not only some interesting perspective, but also some hard facts that readers can use to potentially save some time and money.
If you have been paying much attention to developments in storage and computing infrastructure in the last few years, you have noticed a significant trend toward virtualization. Servers aren't servers any more, they are virtual machines. Tapes aren't tapes any more, they are virtual tape libraries like the IBM TS7650 ProtecTIER Deduplication Appliance. And in the area of disk virtualization, the most widely adopted approach is the IBM SAN Volume Controller (SVC).
Up until now, disk virtualization has been an enterprise-wide thought. Storage managers who are tasked with taking care of hundreds of TB's, and often PB's of disks have for years turned to SVC to help eliminate the pain of migrating data between arrays. For these administrators, disk virtualization with SVC has also helped provide a common set of management interfaces and procedures across storage from different vendors, and has helped to create a common set of services like thin provisioning, snapshotting, and mirroring across different tiers of storage.
Not every storage manager, though, is responsible for PB's, or even hundreds of TB's of storage. Most administrators are just looking for an affordable and 'easy to manage' means of satisfying the next request for more storage on Exchange, or SAP, or... About a month ago, IBM introduced some important changes in its mid-range disk virtualization product, SVC EE, designed with these storage managers in mind.
Perhaps the best way to describe these changes is with a picture...
One of the challenges with traditional disk arrays is that they are relatively inflexible. Think about it... the arrays that have a lot of function (thin provisioning, excellent snapshotting, mirroring, etc.) are generally large, monolithic things that can take up a lot of real estate and burn a lot of power before you get to the first byte of storage. On the other hand, the arrays that are more modular -- allowing incremental growth -- generally don't offer the best software capabilities. And what's more, all of them generally charge an arm and a leg for the software capabilities they do offer.
The important thing IBM did was to package its virtual controller software in an affordable form factor and price it in such a way that mid-sized administrators can build and grow their storage infrastructure modularly. Do you need more disk capacity for a new application? Add an IBM DS3400 SAS disk enclosure. Do you have plenty of capacity but just want some more performance or connectivity? Add an SVC 8A4 controller pair. Do you have plenty of performance but just want some more capacity for archiving? Add a DS3400 SATA disk enclosure. With this sort of modular approach to scaling, the incremental cost of adding capacity can be greatly reduced.
Regardless of how you choose to grow your virtual disk system, a valuable set of services is included in the base software license (i.e., at no extra charge). They include:
Although I have used IBM DS3400 disk enclosures in my example, a virtual disk system of unlimited size can be constructed using any number of IBM DS3400, DS4000 or DS5000 family disks. SVC EE can also virtualize up to 250 disks from other IBM or non-IBM disk systems.
Lower incremental cost for adding capacity. Efficient SAS and SATA disks. A valuable set of software functions included in the base price. Common management from the smallest configuration to the largest. Would that help save some time and money?
Richard Vining 2700019R2A email@example.com Tags:  hsm data-management backup storage-blog archive data-reduction space-managment deduplication 2,998 Visits
Data Reduction Chapter 7: Data Deduplication
As discussed in earlier chapters, data deduplication is a hot technology that is used to reduce data storage capacity requirements. If you employ smart choices in backup and data management processes, you might not need data deduplication. But if you keep all of your inactive and unimportant data on your production storage systems, and use backup software that forces you to perform repetitive full backups of all that static data, then data deduplication can provide you with a huge benefit.
The basic idea behind data deduplication is to store just one copy of any data object, and place pointers to the single copy wherever duplicates are eliminated. Some solutions do this at a file level, so that the files have to be exactly the same to be deduplicated. This is often called single-instance storage (SIS). Other solutions deduplicate data at a fixed or variable block length. IBM’s solutions use a blended approach based on the size of the data—file-based for smaller files, and variable block for larger files.
Most deduplication solutions run a checksum algorithm against the selected data to create a hash signature, then check to see if that signature has ever been seen before. If it has, the data is discarded and a pointer to the already stored data is put in its place. A small number of high-end solutions perform a complete byte-level differential comparison of the data to remove all potential for “data collisions,” where two distinct data blocks may share the same hash signature.
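A minimal sketch of the hash-based approach described above, assuming a fixed block size and SHA-256; real products vary the chunking (fixed vs. variable blocks, whole files) and the high-end systems add the byte-level verification mentioned in the text, which is elided here. The `DedupStore` class is invented for illustration.

```python
import hashlib

class DedupStore:
    """Toy target-side deduplication: store each unique block once,
    keyed by its SHA-256 digest, and keep a list of pointers (digests)
    per logical object."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # digest -> bytes, stored exactly once
        self.objects = {}  # name -> ordered list of digests

    def put(self, name, data):
        digests = []
        for i in range(0, len(data), self.block_size):
            chunk = data[i:i + self.block_size]
            h = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(h, chunk)  # duplicates collapse to one copy
            digests.append(h)
        self.objects[name] = digests

    def get(self, name):
        """Rehydrate an object by following its pointers."""
        return b"".join(self.blocks[h] for h in self.objects[name])

    def stored_bytes(self):
        return sum(len(c) for c in self.blocks.values())
```

Writing the same payload under two names stores the blocks only once, which is exactly why repetitive full backups of static data deduplicate so dramatically.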
Data deduplication can and does occur at many points in the data creation and management life cycle. In general, these points of deduplication can be broken into source-side, where the data is created, and target-side, where it is stored and managed. Backup applications, for example, can perform source-side deduplication by not transferring data that has previously been backed up over the LAN or WAN, saving on bandwidth.
On the target side, the most popular use of deduplication is in virtual tape libraries, or VTLs. These disk-based systems emulate tape libraries and drives, but apply deduplication to store equivalent amounts of data on disk very cost-effectively while providing performance advantages over tape. Performing deduplication on tape-based systems is considered to be a bad idea, given the portable nature of tapes and the need to recycle them over time; it would be very difficult to guarantee that you maintain the original data for all of the pointers that are out there.
Today, IBM offers two compelling data deduplication solutions. The Extended Edition of Tivoli Storage Manager 6 includes deduplication capabilities to eliminate duplicate data that has been backed up from multiple production systems. Again, TSM’s progressive-incremental backup methodology does not create massive amounts of duplicate data, so the deduplication is only effective when the same data exists on different systems.
The other solution is the IBM System Storage ProtecTIER® family of deduplication systems for reducing data coming from multiple sources, including Tivoli Storage Manager servers, backups from other backup systems, or archive software solutions.
A lot of customers ask when they should use TSM deduplication and when they should use ProtecTIER. I’ll cover this question in detail in my next blog, but the simple answer is:
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Delbert Hoobler 1000008PR6 firstname.lastname@example.org Tags:  storage-software tivoli storage storage-blog storage-management tsm 8 Comments 7,852 Visits
Have you played around with IBM Tivoli Storage FlashCopy Manager on Windows yet? If not, maybe it's time to take a look.
When you think of FlashCopy Manager, think of snapshots. FlashCopy Manager provides fast application-aware backups and restores leveraging advanced snapshot technologies. I have been writing software as a developer for IBM Tivoli Storage Manager for almost 20 years now and this technology is one that is changing the industry. Yes, snapshots have been around for a while, but it isn't until the last few years that applications are really starting to embrace them, and in some cases, even require them for their backup needs. There is just too much data to process, too much overhead to back them up, and too little time. People want their applications to serve email and provide access to database tables, not spend their precious cycles on backups. FlashCopy Manager helps address these issues.
FlashCopy Manager follows up on the heels of IBM Tivoli Storage Manager for Copy Services (TSM for CS) which provided snapshot support for Microsoft SQL Server and Microsoft Exchange Server using Microsoft's Volume Shadow Copy Service (VSS). The really cool thing is that you do not need to have a TSM Server in order to use FlashCopy Manager to manage your snapshots. It will work completely stand-alone if you want. But, if you have a TSM Server already, you can use it to extend the power of FlashCopy Manager even more.
What is VSS? VSS is Microsoft's snapshot architecture. It provides the infrastructure for applications, storage vendors, and backup vendors to be able to perform snapshots in a federated and efficient way. Microsoft thinks VSS and snapshots are important enough to require any new software releases that come out of Redmond to be able to be backed up and restored using VSS. If you are running Microsoft Exchange Server or Microsoft SQL Server, you should take a look at snapshots. Microsoft has been supporting snapshots with Exchange and SQL for years, but Microsoft Exchange Server 2010 is kicking it up a notch. Microsoft Exchange Server 2010 is only supporting backups through VSS. Yes, you heard it right, Microsoft does not support legacy style (streaming) backups with Exchange Server 2010. So, if you are planning a move to Exchange Server 2010, it really behooves you to start looking at Microsoft's Volume Shadow Copy Service (VSS), how it works, and the benefits and complexities it brings with it.
Microsoft's Volume Shadow Copy Service (VSS) is complex and involves multiple moving parts. It will pay for you to invest some time to understand more about it. I have put together some links that will help you get started:
I encourage you to take a look at Windows VSS snapshots and FlashCopy Manager to see how they might help you. Enjoy!
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  warming our mgm global summit gore tivoli pulse vegas climate las rational websphere management smarter copenhagen crisis choice ibm al service planet 1,538 Visits
In response to: Cooler Planet Crusader-In-Chief
I agree that the guest speaker selection makes a lot of sense with regards to building a smarter planet. It will be interesting to hear what Al Gore has to say.
Richard Vining 2700019R2A email@example.com Tags:  data-management backup deduplication data-reduction hsm archive storage-blog space-managment tivoli 3,508 Visits
Data Reduction Chapter 8: Deduplication with Tivoli Storage Manager 6, FastBack and ProtecTIER
So far in this series, we’ve detailed the challenges that the tidal wave of data is placing on storage administrators, and how a smarter, more holistic and comprehensive approach to data reduction is needed to survive in a way that lets you do more with less.
We covered eliminating the largest source of duplicate data (full backups) and automating the migration, archiving and deletion of older data. Then, in chapter 7, we covered the basics of data deduplication. Now we’ll detail the differences between IBM’s deduplication offerings, and when to best use each.
Let’s talk first about the deduplication capabilities of Tivoli Storage Manager (TSM). This feature is included at no additional charge for TSM 6 Extended Edition customers. This solution can help to reduce recovery times by enabling you to store more backup data and recovery points on disk rather than tape. It works with the data from all sources – via normal backups, data imported via the TSM API, as well as archive and HSM data. TSM deduplicates your disk-based data pools as a post-process, so there is no impact on backup performance. After running, it automatically reclaims the storage that has been freed up.
TSM already eliminates the most common cause of duplicate data – full backups – so the reduction ratios you can expect from TSM’s deduplication solution are fairly modest – the average is about 40%. But when combined with its progressive incremental backup approach and built-in data compression, TSM’s effective data reduction rate is extremely competitive with any other solution on the market, as has been detailed in a commissioned report written by Enterprise Strategy Group (ESG), available here (fair warning – registration required – sorry).
Announced today, Tivoli Storage Manager FastBack v6.1 also includes target-side data deduplication to help reduce the capacity required in the FastBack backup repository, adding to its value as the leading near-instant recovery solution on the market for business critical Windows servers and remote/branch offices. Also announced today was Linux support and tighter integration with the Tivoli Storage Manager Integrated Solutions Console (ISC), delivering on IBM’s vision of true enterprise-wide Unified Recovery Management.
IBM System Storage ProtecTIER is a technology leader in performance, scalability, data integrity and reliability. In true apples-to-apples comparisons, this solution is the fastest on the market in real customer environments. A single ProtecTIER system can easily scale in both performance (1000 MB/sec) AND capacity (1 PB of deduplicated data). ProtecTIER is one of the few solutions that doesn’t rely on a hash algorithm alone; it performs a byte-level differential comparison to verify that data is truly a duplicate, for enterprise-class data integrity. And ProtecTIER features best-of-breed IBM components rather than the inexpensive OEM parts found in competitive products.
ProtecTIER has been proven in very large production environments and is supported worldwide by IBM’s services operations. The TS7650 ProtecTIER Deduplication Family ranges from small (7TB) to medium (18TB) to large-scale (36TB) appliances. And the TS7650G gateway offerings allow you to add the storage of your choice, up to 1PB. Active-Active cluster configurations also provide high availability capabilities.
Video on ProtecTIER: http://www.youtube.com/watch?v=6Uk41HpCTqo&feature=related
Review - Choosing TSM or ProtecTIER for Data Deduplication
While TSM works very well in ProtecTIER environments, you wouldn’t use both TSM deduplication and ProtecTIER deduplication simultaneously. That would require twice as much work for no additional benefit. So when should you choose one over the other? Both solutions offer the benefits of target side deduplication: greatly reduced storage capacity requirements (especially when using TSM’s progressive incremental backup). You’ll have lower operational costs, energy usage and Total Cost of Ownership. You also get faster recoveries with more data on disk.
Use TSM 6 built-in data deduplication when:
• You want deduplication operations completely integrated within TSM
• You want the benefits of deduplication without the cost of separate hardware or software – it ships at no additional charge with TSM 6 Extended Edition
• You want end-to-end data lifecycle management with minimized data store requirements
Use ProtecTIER when:
• You need the highest performance up to 1000 MB/sec or more
• You have a large amount of data and need scalable capacity and performance
• You need inline deduplication to avoid the operational impact of post processing
• You are deduplicating across multiple TSM (or other backup) servers
• You don’t have TSM and are performing weekly full backups.
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 9, where we’ll summarize IBM’s holistic approach to data reduction and show you how we can help you survive the tidal wave of data.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  tivoli continuous-data-protectio... ibmstorage tsm-fastback ibm tivoli-storage-manager-fa... data-protection 1,768 Visits
New Product Announced Dec. 15, 2009
IBM Tivoli Storage Manager FastBack for Workstations is an automated, continuous data protection and recovery software solution for desktop and laptop computers, with central management for thousands of systems, and integration with other Tivoli Storage Management offerings.
Here is the URL for this bookmark: http://www-01.ibm.com/software/tivoli/products/storage-mgr-fastback-workstation/
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  information infrastructure tivoli management ibm 2010 dynamic websphere pulse rational service-management 1,782 Visits
In response to: The BIG Questions at Pulse
Those are great questions.
Additionally, you should consider asking yourself these questions that relate to, "What's the Value of this Data to the organization?"
1. Do you have a plan for recovery of that data if lost or corrupted?
2. How fast is that data growing and how are you dealing with the growth?
3. How are you providing increasing service levels with lower cost?
By attending the Storage and Information Infrastructure track at Pulse 2010, you'll find the answers to the questions I've added along with answers to any additional questions you may have concerning your storage, data, and information management.
Take a look at the video below and see how Tivoliman Tames the Data Juggernaut Beast.
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  deduplication hsm backup data-reduction tivoli storage-blog space-managment data-management archive 1,430 Visits
Data Reduction Chapter 9: Surviving the tidal wave of data with IBM data reduction solutions
I hope everyone had a safe and enjoyable holiday, and I’m looking forward to an exciting and prosperous new year. I’d like to take this opportunity to summarize the topics I’ve been covering in this series of data reduction blogs, and give new readers links to the specific topics that you might be interested in.
Please ask yourself these questions:
Through this series, we’ve shown that IBM is the only vendor with a comprehensive set of data reduction solutions that can be applied at multiple points throughout the data creation and management lifecycle. IBM’s broad portfolio of data reduction solutions gives us the freedom to solve your data storage and management issues with the most effective technology for your particular situation. And IBM is continuing to invest in research and development to further develop and deliver the advanced features our customers are requesting.
To learn more, please download my new Data Reduction whitepaper, view the on-demand webcast with Nick Allen, or visit the Data Reduction solution site.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
John Foley 12000084U0 FOLEYJOH@US.IBM.COM Tags:  storage-management tivoli ibm storage-blog patel milan cloud ibmstorage video storage-cloud 2,376 Visits
Cloud storage technologies made impressive strides in 2009, and the trend looks to build on that momentum in 2010. IBM is expecting steady growth in cloud storage deployments, especially in the areas of test environments, Web serving, and other non-mission-critical scale-out storage needs.
Standards in this area are just beginning to be discussed and will also be evolving in 2010. Standard file protocols such as CIFS and NFS are obvious starting points for cloud storage access, but other approaches utilizing object storage techniques have also been proposed.
To prepare for cloud storage within the data center, IT managers will need to identify a small number of focused areas to use as starting points. In the short term, cloud storage is a technology that will be deployed to address specific and unique requirements across the enterprise. It is therefore recommended to carefully choose pilot areas where managers can gain the insight needed to extend usage into other areas and build skills for when the technology becomes more widely deployable.
Watch this video to gain important insight into what it takes to deploy and manage highly available cloud storage environments.
Click here for additional information about IBM Cloud Storage Solutions
Ron Riffe 100000EXC7 email@example.com Tags:  tivoli storage-blog storage-management tsm tpc recovery-management ibmstorage fastback storage-software virtualization storage svc 1,758 Visits
Yesterday, in discussing IBM's fourth quarter 2009 financial results, IBM CFO Mark Loughridge had this to say about Storage Software:
"Tivoli storage continued its robust growth as customers manage their rapidly growing storage data. Data Protection as well as Storage Management grew double digits, with broad-based geography and sector growth."
If you are already benefiting from IBM Storage Software - Thank you!!
If you haven't yet started taking advantage of IBM Storage Software, come visit us.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  tivoli information-infrastructur... storage virtualization pulse ibmpulse pulse2010 data-availability data-management tsm backup-recovery dynamic-infrastructure data-recovery ibmstorage ibm data-reduction storage-blog data-protection tivoli-storage 2,626 Visits
With only 4 weeks until Pulse 2010 - The Premier Service Management Event - Optimizing the World's Infrastructure, I thought it might be helpful to provide some details around the sessions and activities that will be available to all of you storage and information infrastructure enthusiasts out there....
Here are a few sessions that you can attend each day. Sign up for these sessions and others today (requires only an IBM.com password - you do NOT have to be a Pulse registered attendee to create a Pulse schedule online)!
Go to the on-line agenda tool to see additional Storage and Information Infrastructure sessions that may be of interest to you. There are also sessions in the Expo Theater Stream.
Register and attend Pulse to take full advantage of all that will be offered:
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  appliance tivoli amazon software management midmarket system service monitor tfam saas ami pulse foundations ibm midmkt tivfdt fsm 1,432 Visits
In response to: Service Management for Midsize Businesses at PULSE
It's great to see that IBM Tivoli Storage Manager FastBack will be showcased as the backup and recovery solution for midsize businesses and included in the Service Management for Midsized Business Track at Pulse 2010. Also be sure to check out the Expo to see the IBM Comprehensive Data Protection Solution Express demonstration.
Oren Wolf 270002KMMG firstname.lastname@example.org Tags:  backup tivoli hyper-v tsm storage-software storage backup-recovery storage-management vmware storage-blog 1,795 Visits
I don't know about you, but I have been virtualizing like crazy over the last few years: humongous servers have turned into medium-sized virtual machines, and test and lab environments have turned into small files running on my laptop from a flash drive.
My IT department has been virtualizing even more: consolidating servers, sharing storage resources among multiple machines and converting NICs (Network Interface Cards) into virtual switches (I still haven't figured out how they did that).
The move to a virtualized environment is very useful for reducing energy consumption, shrinking the physical server and storage footprint and driving up processor and storage utilization, but it also has some side effects when it comes to data protection.
The problem begins at the same place that drove us to virtualization in the first place: resource sharing. You may now have 10 virtualized servers running on the same physical host. If your backup process consumed only 5% of the CPU and I/O on your physical server, imagine what happens when all 10 virtual machines kick off their backup processes at the same time...
There are multiple valid approaches for providing data protection to those virtual machines and I’ll try to address each and every one of them in upcoming blogs…
Other enhancements that might not necessarily be backup related but have to be seriously considered when virtualizing include
Richard Vining 2700019R2A email@example.com Tags:  flashcopy storage-blog instant-restore recovery-management snapshot tivoli data-protection 2,447 Visits
IBM will be providing a series of live web-based demonstrations dedicated to showing you the value of IBM Tivoli Storage FlashCopy Manager and how the product works. These will be demos of live code.
Organizations seeking to improve protection for business-critical application data can leverage Tivoli Storage FlashCopy Manager to simplify management through integration with IBM storage hardware advanced snapshot technology. As the first event in a series of customer web conferences, we will focus on demonstrating the features of Tivoli Storage FlashCopy Manager for the Microsoft Exchange environment.
Audience: IBM Customers and their associated IBM and Business Partner Sales representatives
Key Features of Tivoli Storage FlashCopy Manager include:
Hosts: John F. Miller, IBM North American Sales Executive
Neil Rasmussen, IBM Software Designer for Tivoli Software
Tivoli Storage FlashCopy Manager Demo Schedule
All calls will begin at 12 noon ET and last 1 hour.
Web Conference details:
Audio Conference Call details:
To learn more about IBM Tivoli Storage FlashCopy Manager, please visit:
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  fastback demonstration instatnt-restore large-eterprise disaster-recovery midmarket tivoli storage-blog storage-software data-protection backup ibmstorage microsoft-exchange branch-office continuous-data-protectio... recover 4,854 Visits
Live Demo! IBM Tivoli Storage Manager FastBack and IBM Tivoli Storage Manager FastBack for Exchange Scheduled Dates in 2010
Mark Your Calendars!
IBM will be presenting a series of live demonstrations dedicated to showing the value of the IBM Tivoli Storage Manager (TSM) FastBack and TSM FastBack for Exchange data protection products.
These additions to the TSM product family offer the ability to meet aggressive Recovery Point and Recovery Time Objectives in an organization's data protection service.
The TSM FastBack family provides many advanced features including:
Instant Restore allows users to access their data or applications immediately, while the restore is still taking place.
Continuous Data Protection sends backup data continuously, which allows recovery to any point in time.
Incremental Forever Backups eliminate the time and cost of performing and storing unnecessary full backups. Each backup appears to be a full backup, but only the blocks that have been modified are copied.
FastBack Mount allows access to backed-up data without recovering it. This lets data be validated after backups, the correct data be identified before it is recovered, or data be opened and its contents recovered at a more granular level, reducing the size and duration of the recovery.
Exchange Brick-level Recovery allows individual Exchange mail objects to be recovered from a previous backup without requiring an entire Exchange database to be recovered. TSM FastBack for Exchange does not require additional backup processing to provide IMR.
Branch Office Disaster Recovery allows replication of branch office backup data to a central site. This data can be compressed and encrypted during the transfer. The replicated data at the central site can be used as the source for creating a tape copy of the data or for recovering branch office data and hosts. TSM FastBack allows the backups and replication of multiple branch offices to be monitored with a single tool.
TSM FastBack Bare Machine Recovery allows hosts to be quickly recovered, even to dissimilar hardware.
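The "incremental forever" idea above can be sketched in a few lines. This is a hypothetical illustration, not TSM FastBack's actual implementation: each backup stores only the blocks that changed, yet any restore point can be materialized as a full image by layering the snapshots in order.

```python
class IncrementalForeverRepo:
    """Toy repository: one full base image, then changed-blocks-only backups."""

    def __init__(self, base_image):
        # base_image: dict mapping block number -> bytes (the first full backup)
        self.snapshots = [dict(base_image)]

    def backup(self, changed_blocks):
        """Store only the modified blocks; each call creates a restore point."""
        self.snapshots.append(dict(changed_blocks))

    def restore(self, point):
        """Reconstruct a full image for restore point `point` (0 = base)."""
        image = {}
        for snap in self.snapshots[: point + 1]:
            image.update(snap)  # later snapshots override earlier blocks
        return image


repo = IncrementalForeverRepo({0: b"boot", 1: b"data", 2: b"logs"})
repo.backup({1: b"data-v2"})   # only block 1 changed since the base
repo.backup({2: b"logs-v2"})   # only block 2 changed since then

# Every restore point "appears to be a full backup":
assert repo.restore(2) == {0: b"boot", 1: b"data-v2", 2: b"logs-v2"}
assert repo.restore(0) == {0: b"boot", 1: b"data", 2: b"logs"}
```

Only the deltas are stored, but a restore never has to replay a chain of full-plus-incremental tapes by hand; the layering does it.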
These demonstrations are open to Customers, Business Partners and IBM employees.
TSM FastBack Demo Schedule for 2010:
Demos will be available in English and Spanish. All English calls will be at 10:30 AM and 3:00 PM Central Time on Thursdays.
All Spanish calls will be available at 1:00 PM Central Time on Wednesdays.
There are Web Conference and Audio Conference components to this demonstration.
Conference ID is FASTBAK
Prior to the web conference, we suggest you do the following:
1) Go to www.sametimeunyte.com
2) Click on Support
3) Click on Lotus Sametime Unyte Meeting System Check
4) Select your attendee type and click Next
5) Proceed with the system check and install any plug-ins required
English Live Demo Audio Conference:
Title: TSM FastBack LIVE Demo
Toll Free: 800-857-4143
Spanish Live Demo Audio Conference:
USA- Toll Free: 888-359-3613 Toll: 719-325-2348 T/L: 650-2012
Argentina-0800 666 2982; Australia-1 800 138 721; Austria-0800 291 390
Belgium-0 800 77 128; Brazil-0800 891 4391; Bulgaria-00 800 1100 178
Chile-123 0020 9673; China, Northern-10 800 714 1159; China, Southern-10 800 140 1141
Colombia-01 800 518 0760; Costa Rica-0800 015 0597; Czech Republic-800 900 705
Denmark-80 884 789; Finland-0 800 1 119654; France-0 800 902 956
Panama-08 600 205 3173; Peru-0 800 53 354; Philippines-1 800 1110 0845
Portugal-800 819 688; Russia-810 800 2679 1012; Singapore-800 101 1954
Slovenia-0 800 80158; Spain-900 967 691; Sweden-02 079 3083
Switzerland-0 800 563 064; Thailand-001 800 156 205 5311; Trinidad and Tobago-1 800 205 5311
United Kingdom-0800 028 9769; Uruguay-0004 019 0176; Venezuela-0 800 100 5265
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  midmarket saas foundations appliance service management ami software amazon tfam fsm system monitor midmkt tivfdt ibm tivoli 1,724 Visits
In response to: The Pulse Roadmap to Mid-Market Solutions and Opportunities
IBM Tivoli Storage Manager FastBack is a great continuous data protection, backup and recovery solution for both midmarket and large enterprise organizations, whether for branch offices or data centers. For more storage sessions while at Pulse 2010, check out this blog post: https://www-950.ibm.com/blogs/tivolistorage/entry/the_pulse_roadmap_to_storage_expertise?lang=en_us
Delbert Hoobler 1000008PR6 firstname.lastname@example.org Tags:  storage-management flashcopy fcm storage-software tsm storage 1,645 Visits
The last time I blogged, I told you about IBM Tivoli Storage FlashCopy Manager on Windows and just how cool it is. Well, I am working on some more neat stuff, and I wanted to tell you about a beta program for the upcoming release of IBM Tivoli Storage FlashCopy Manager. If you want to test some of the new functions and features of the upcoming release, please contact Mary Anne Filosa (email@example.com) or your IBM Sales representative for details.
The enrollment period is ending soon, so don't wait to be a part of the action!
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage-blog data-reduction ibmstorage data-management dynamic-infrastructure pulse2010 pulse data-recovery data-availability ibm backup-recovery storage-software data-protection ibmpulse tivoli virtualization service-management 2 Comments 2,420 Visits
The countdown is on... with only 2 weeks left until Pulse 2010, I wanted to give you an update on the additional perks you'll have access to if you register and attend.
Meet the Experts!
Talk one-on-one with Product Experts
Visit the Expo!
Share Your Story
This year at Pulse 2010 we are scheduling videotaped interviews with clients who are willing to share their thoughts on what they are doing to achieve visibility, control, and automation in their infrastructure. We will be filming client videos at Pulse from Sunday, February 21, through Wednesday, February 24. The content will be used to produce short videos that highlight the needs clients are addressing in their organizations. Our customers have been sharing their stories throughout 2009, as you can see below. Interested in participating? Notify me at firstname.lastname@example.org
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  space-managment midmkt archive storage-blog data-reduction backup midmarket tivoli data-management deduplication hsm mid-market 1,147 Visits
In response to: See Tivoli Storage Management - and me - in action at Pulse 2010
Rich, thanks for the recap of some important sessions that will be presented at Pulse. Additional Storage and Information Infrastructure tracks can be found on the Tivoli Storage Blog.
John Foley 12000084U0 FOLEYJOH@US.IBM.COM Tags:  smart-archive storage-blog ibm information-archive ibmstorage ibm-storage 1,714 Visits
The end of last year was pretty hectic for a lot of us, and you might not have attended IBM's "Information on Demand Gala," where we introduced our Smart Archive Strategy. Several of my customers have been asking for a refresher on the topic, and we've just posted a short video describing this comprehensive approach, which combines IBM software, systems and service capabilities designed to help you extract value and gain new intelligence from information by collecting, organizing, analyzing and leveraging it. For more information, watch the video, visit the IBM Smart Archive Strategy website, and meet me at Pulse 2010 by attending the Storage Track sessions to discuss your specific archiving needs.
Richard Vining 2700019R2A email@example.com Tags:  data-management data-reduction hsm deduplication space-managment recovery archive tivoli backup storage-blog 2,971 Visits
On 19 March 2010, IBM will release Tivoli Storage Manager V6.2, the next in a long line of enhancements to the leader in enterprise-wide data protection, unified recovery management and effective data reduction. Highlighting this release is the addition of source (client-side) data deduplication, tighter integration with TSM FastBack, enhanced support for virtual server environments, automatic deployment of Windows client upgrades, and improved automation and performance of back-end data management processing.
Full details of this announcement are in the TSM 6.2 Announcement Letter
Oren Wolf 270002KMMG firstname.lastname@example.org Tags:  virtualization storage storage-software storage-blog data-protection backup-recovery data-recovery backup 2,414 Visits
In my previous blog I discussed some of the viable approaches to data protection for virtual machines. Before I delve into the pros and cons of each approach, I'd like to discuss the fundamental differences between file-level and block-level backup (and solicit your input :-) ).
Encapsulation is one of the basic rules of software design; simply put, it's the computer geek's equivalent of the famous "Don't ask, don't tell" policy. The idea is pretty simple. Let's assume our file system is component A and our disk system is component B. Components A and B each publish a public interface that others can use, but they hide their internal mechanisms from other components. This enables us to do some nifty tricks, such as RAID: as far as the file system is concerned, it is working with a "regular disk"; it is unaware that our disk system has actually taken the 100GB of disk space we defined and partitioned it into multiple stripes spread across 5 different disks, in order to provide it (the FS) with better performance and hardware fault tolerance. This principle is used in other places too, but you have to agree that it comes in pretty handy.
But why do I even mention "encapsulation", and how is it relevant to file- vs. block-level backups?
The point I am trying to make is that the disk level is not aware of the "file contents" and the file system is not aware of the "disk layout"; this dictates the pros and cons of these two very different approaches to data protection.
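The encapsulation point can be made concrete with a toy sketch (all names here are illustrative, not a real storage stack): the "file system" code only uses the disk's public block interface, so a striped, RAID-like disk can be swapped in without the file system ever knowing its blocks live on several physical disks.

```python
class PlainDisk:
    """A simple disk exposing only a block read/write interface."""

    def __init__(self, num_blocks):
        self.blocks = [b""] * num_blocks

    def write_block(self, n, data):
        self.blocks[n] = data

    def read_block(self, n):
        return self.blocks[n]


class StripedDisk:
    """Same public interface, but blocks are striped across member disks."""

    def __init__(self, members):
        self.members = members

    def write_block(self, n, data):
        # Round-robin striping: block n lands on member (n mod width)
        width = len(self.members)
        self.members[n % width].write_block(n // width, data)

    def read_block(self, n):
        width = len(self.members)
        return self.members[n % width].read_block(n // width)


def save_file(disk, start, chunks):
    """'File system' code: it only sees the public block interface."""
    for i, chunk in enumerate(chunks):
        disk.write_block(start + i, chunk)


# The same file-system code works against either implementation,
# because the striping is hidden behind the interface.
for disk in (PlainDisk(100), StripedDisk([PlainDisk(20) for _ in range(5)])):
    save_file(disk, 0, [b"hello", b"world"])
    assert disk.read_block(0) == b"hello"
    assert disk.read_block(1) == b"world"
```

The file system asks for "block 1" and gets its data back; whether that block physically sits on disk 0 or disk 4 of a stripe set is the disk system's private business.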
With file-level backups it's really easy to define which files you want to protect. Then, when the time comes, someone has to access the files and move the data they contain to some sort of data repository. To do that, you must deal with issues such as:
- Open files
- Interdependencies between multiple files
- Identifying which (sub)files have changed
- For structured data (databases etc.), do we back up the entire file (or file group) or only the portions that have changed?
Block-level backups are usually pretty straightforward: there's a mechanism that keeps track of the changes in "real time" (this usually enables CDP, but that's a whole different story), and when the time comes the data is moved to the data repository. But this technology has its own challenges:
- Minimum granularity is usually a volume
- Hard to exclude unused file data (page file?)
- Recovering files from a block level backup
- Communicating with applications (and File System) to ensure backup consistency.
Generally speaking, block-level backups have a lower overhead than file-level backups, so if you decided to virtualize your environment and keep using agents on the individual virtual machines, you would probably want a block-level backup solution. File-level backups are still viable (especially if they skip the "indexing" process by using an FS filter or journaling and allow for "sub-file" incremental backup), but you will need to be more careful when planning your backup windows in order to prevent VM sprawl.
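The real-time change-tracking mechanism behind block-level backup can be sketched as follows (a minimal, assumed design, not any vendor's implementation): a write interceptor records dirty block numbers as writes happen, and each backup pass copies only those blocks before clearing the tracker.

```python
class TrackedVolume:
    """A toy volume whose write path records changed blocks in real time."""

    def __init__(self, num_blocks):
        self.blocks = [b"\x00"] * num_blocks
        self.dirty = set()            # change tracker updated on every write

    def write_block(self, n, data):
        self.blocks[n] = data
        self.dirty.add(n)             # intercept the write, note the change

    def backup_changed(self, repository):
        """Copy only changed blocks to the repository, then reset tracking."""
        for n in sorted(self.dirty):
            repository[n] = self.blocks[n]
        copied = len(self.dirty)
        self.dirty.clear()
        return copied


vol = TrackedVolume(1000)
repo = {}
vol.write_block(7, b"a")
vol.write_block(42, b"b")
assert vol.backup_changed(repo) == 2   # only 2 of 1000 blocks moved
vol.write_block(7, b"a2")
assert vol.backup_changed(repo) == 1   # next pass moves just the 1 new change
assert repo[7] == b"a2" and repo[42] == b"b"
```

This is also why the minimum granularity is the volume, and why recovering an individual file from such a backup requires extra machinery: the tracker sees block numbers, never file names.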
Stay tuned; next we'll discuss other approaches, such as proxy backups...
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  tivoli pulse2010 ibmstorage archive partners pulse storage-software infromation-archive ibmpulse storage-blog 1 Comment 1,777 Visits
Pulse kicked off today with the Business Partner Summit. I attended the IBM Information Archive session, where the attending partners and I learned about the Archiving Ecosystem and how IBM Information Archive helps reduce costs, improve productivity/efficiency and reduce risks. Information Archive is a simplified, cloud-ready smart business system.
Some important questions to help understand whether or not an archiving solution is needed include:
The partners in the session had a lot of great comments and questions.
If you are a partner and you were unable to attend the IBM Information Archive session (or you attended but want to hear more) you can attend the other sessions that are scheduled at Pulse:
A technical look inside IBM's next generation archive appliance Tuesday 3:30-4:30pm @ RM 120
IBM's Smart Archive Strategy Simplifying Information Retention Tuesday 5:00-6:00pm @ RM 120
Birds of a Feather: IBM Smart Archive Strategy Discussion Tuesday 6:00-7:00pm @ RM 120
Next stop for me is the Pulse 2010 Business Partner Summit General Session!
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  ibm tivoli rational las-vegas management service pulse olympics 1,178 Visits
In response to: Pulse: The Olympics of Service Management
It's amazing that so many countries are represented at Pulse 2010. I wonder how many of them are here to learn about storage....
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  ibmpulse pulse pulse2010 storage storage-blog information-infrastrucutr... 1,201 Visits
Pulse 2010 got off to a great start with a very successful Business Partner Summit. Several Storage partners attended the Storage Breakout session. We were even able to get some of them to sign up for professional video interviews...
During the Tivoli Storage Software Strategy and How to Sell It! session, Dan Galvan, VP of IBM Systems Storage Marketing, gave an overview of the Smarter Planet initiative, and Ron Riffe provided an in-depth presentation on the storage software portfolio. Partners were informed of three solution plays that they can focus on for storage. Many questions were asked and answered.
We also provided details on how our partners can stay connected during and after Pulse with IBM Storage networks and social media. These networks are also available to our customers and our partners' customers.
Tivoli Storage Blog for getting conference updates and daily highlights from Pulse 2010. This blog is used to discuss many different topics like data reduction, virtualization, new product announcements and more.
IBM Storage Community for managing your contacts at Pulse, sharing links and bookmarks, and providing feedback on the conference.
IBM Storage on Twitter for listening and contributing to real-time buzz with other Pulse attendees and organizing meetups. Use #ibmpulse in your tweets. You can also follow me on Twitter.
IBM Storage LinkedIn Group for connecting and networking with individuals interested in IBM Storage
Storage Management on YouTube for viewing and commenting on live stories with Pulse attendees and viewing other storage videos.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  ibmpulse storage storage-blog pulse2010 pulse information-infrastructur... 1,291 Visits
Today (Monday) was an action packed and exciting day.
The day started off with the General Session, where Al Zollar, the General Manager of Tivoli, opened with a discussion of Smarter Planet and how the world is getting smarter - Instrumented - Interconnected - Intelligent. He gave several examples of how companies are shifting to become smarter: smarter buildings, smarter healthcare, smarter cities, etc. By becoming smarter, Al explained, both risk and complexity can be reduced.
I enjoyed hearing about the Capital Region of Denmark, how they have over 1.5 billion bytes archived, and how they revolutionized their storage management so that they manage all that data with 4 staff members.
The presentation then went into Integrated Service Management for Data Centers, for Design & Delivery, and for Industries which consists of
and the importance of... Visibility, Control, and Automation!
There were also some new storage announcements made in the general session (stop by the expo to see the demo of each product):
The other speakers included Rational General Manager Danny Sabbah, who dove deeper into Integrated Service Management for Design & Delivery, and Laura Sanders, Tivoli Vice President of Strategy & Development, who gave an entertaining demonstration with live code showing a smarter city (accompanied by Dave Lindquist, IBM Fellow, Vice President & CTO, Tivoli Software, and Dr. Wing To, Vice President of Strategy and Product Management, Tivoli Software). After the demo, the last IBM speaker was Mike Rhodin, who went into more depth on Integrated Service Management for Industries.
The guest speaker to wrap up the General Session was former Vice President of the United States, Al Gore.
After the General Session, the rest of my day was a blur. It was filled with attending the Storage & Information Infrastructure track kick-off, meetings with customers and business partners to do impromptu video interviews/podcasts, tweeting, reporting storage highlights for the Pulse Points daily newsletter, checking out the expo and demos, and scheduling more video interviews. I must have walked at least 6 miles today, making trips to and from the conference center several times. I was a little bummed that I wasn't able to attend as many of the customer case sessions in the storage track; I'll have to make up for that tomorrow.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage ibmstorage storage-software tsm partner storage-cloud pulse logicalis ibmpulse pulse2010 tivoli storage-blog 1,698 Visits
Today I did several live video interviews. Let me be honest with you: it is clear that I wasn't meant for the journalism profession... and that is the truth!
I met many IBM clients and business partners throughout this week at Pulse, and today I did an interview with Roger Finney from Logicalis, an IBM Business Partner. We did this interview right outside the expo hall at the MGM Grand hotel, so you can hear the airplanes flying over from McCarran International Airport.
Logicalis has been an IBM Business Partner for over 14 years and is both Software Value Plus authorized and Tivoli Accredited. In this video, I ask Roger to provide some details about how Logicalis has helped its customers with their storage management needs.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage-blog oakwood pulse virtualization ibmpulse storage-software pulse2010 1 Comment 1,683 Visits
I had the pleasure of interviewing one of our client speakers, Brian Perlstein from Oakwood Healthcare System. Brian will be presenting the Oakwood Healthcare System's Virtualization story on Wednesday Feb. 24th at 9:30 to 10:30 am in the Conference Center room 121. Hope to see you there!
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  vcuhs ibmpule pulse2010 pulse storage-software backup-recovery tivoli ibmstorage storage-blog 2 Comments 2,095 Visits
Yesterday I interviewed Greg Johnson, CTO and Director of Technology and Engineering Services for Virginia Commonwealth University Health System (VCUHS). Greg presented at Pulse on Tuesday, discussing how VCUHS is transforming IT in a healthcare environment, with a focus on their storage and backup and recovery solutions. If you weren't able to attend Greg's session on Tuesday from 2:00 - 3:00 pm in Conference Center room 120, watch the video below for a high-level recap of what he presented.
Once again, this was a live interview from outside the expo hall at the MGM, and McCarran International Airport sure is one of the busiest airports in the world... maybe I should have done my interviews inside the conference. Still, I enjoyed the fresh air, and the airplanes in the background just add to the beauty of a live interview. I still think that journalism is a field I will not be pursuing... hopefully my interview skills will improve before Pulse 2011, which will be Feb. 27 - Mar. 3, 2011.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  pulse2010 ibmpulse pulse ibmstorage storage storage-management 1,450 Visits
It's been almost 2 weeks since Pulse 2010 in Las Vegas and I'm still playing catch-up. I finally finished loading all the pictures I took while at Pulse, and last week I finished uploading all my YouTube videos. Check out the IBM Pulse Conference Flickr Group for all the Pulse 2010 photos - the ones I loaded are from tiffwdms. Check out the Pulse 2010 YouTube videos, and for all the storage YouTube videos you can go directly to the Storage Management Playlist.
If you were unable to attend Pulse 2010 in Las Vegas you can attend the virtual event on March 16, 2010. Register here.
I want to share with you a few of the expert videos that I captured while at Pulse.
Kathy Mitton - Tivoli Storage Expert
Jason Perkins - Tivoli Storage Expert
Rajendran Subramaniam 060000D5Y9 email@example.com Tags:  pulse2010 storage-pulse2010 storage-blog ibmstorage ibmpulse pulse storage xiv 1,267 Visits
In response to: Viral Friday - Video - Pulse 2010 - XIV Demo in the Expo
Nice video.
I added it to the Storage YouTube channel playlist at http://www.youtube.com/view_play_list?p=40B9D25D29105511
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage-blog tivoli service-management pulse ibmpulse pulse2010 storage 1,308 Visits
While I was at Pulse 2010 in Las Vegas, I had the pleasure of meeting and interviewing Nils Lau Fredriksen, CIO for the Region of Southern Denmark. Nils was one of the five CIOs who participated in the CIO panel during the Day 2 General Session. It was very interesting to hear his experience with implementing integrated service management, along with those of the other CIOs on the panel.
Nils went into more depth during his presentation on Wednesday, Feb. 24th, regarding his experience implementing integrated service management (or what he calls quality management) at the Region of Southern Denmark. I attended the session, and there were many questions from the audience.
I met up with Nils after his presentation to get a quick interview, which you can watch below...
or in Danish:
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  data-reduction retention deduplication data-protection service-management unified-recovery-manageme... risk-management storage-blog backup recovery compliance tivoli archive business-continuity restore disaster-recovery 1 Comment 3,237 Visits
Unified Recovery Management for a Smarter Planet
Welcome to my new blog series which will focus on simplifying the lives of storage and backup administrators. In this first installment, of course, I’ll start laying out the problem as I see it. Hopefully, you’ve seen all the many IBM Smarter Planet commercials on TV over the past year. The basic story is that the planet is smaller and flatter than it used to be, and is more connected economically, socially and technically. I don’t think anyone would argue that information technology has dramatically changed the way people, businesses and governments interact across the planet.
This is because everything is becoming more instrumented, interconnected and intelligent. Cars are talking to sensors embedded in roads, mobile equipment is tracked via GPS, machines of all types are predicting the need for maintenance and calling home to schedule a service call, groceries are talking to store shelves, and intelligent power meters are reducing the waste in transmission systems.
As former U.S. Vice President Al Gore said in his keynote at Pulse2010 last month, “Traffic on the Internet is now dominated by things communicating with things, rather than people communicating with people”.
The result of all this interconnectivity is the creation of enormous amounts of digital information – data. This data is being used for incredible things: finding cures for many diseases, finding oil and gas in new places, dynamically reducing traffic bottlenecks, preventing crime, and improving the delivery of health care – all while reducing costs. But what the commercials fail to mention is that someone has to manage all this data – it has to be stored, protected, available and reliable. The traditional response to data growth has been to throw more capacity at the problem, but this is no longer the ‘green’ thing to do. While the costs of storage capacity continue to decrease, the costs of housing, cooling and managing storage now consume the majority of most IT budgets. We need to find smarter ways to manage more data, ways that require less infrastructure, less power, less people, and yes, less money.
The environments that all of this data reside in are becoming incredibly complex, leading to an unmanageable patchwork of tools and processes that storage administrators have to use in order to attempt to meet the service level needs of their organizations. For example, the numbers of different hardware platforms, operating systems and applications are expanding (and of course each new application is more important than the last), while the places where data is being created and stored are multiplying. And way too many different things can go wrong, each demanding a different type of response.
I’ll be diving deeper into this complexity in the next installment, and later in the series will describe what IBM is doing to help simplify your life. Get ready for the Smarter Planet, and for managing all the data that it’s creating!
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Rajendran Subramaniam 060000D5Y9 email@example.com Tags:  solid-state-storage ibmstorage pulse pulse2010 storage ibmpulse storage-pulse2010 demo 1,086 Visits
In response to: Viral Friday - Pulse 2010 - Solid-state Storage Demonstration
Thanks for this nice video.
Rajendran Subramaniam 060000D5Y9 firstname.lastname@example.org Tags:  storage disk ibmpulse pulse nas nseries ibm storage-blog netapp ibmstorage pulse2010 1,255 Visits
In response to: Viral Friday - Pulse 2010 - N series Demonstration
Thanks Tiffeni for the nice video.
Richard Vining 2700019R2A email@example.com Tags:  data-reduction archive risk-management restore tivoli service-management deduplication compliance storage-blog recovery unified-recovery-manageme... business-continuity backup data-protection retention disaster-recovery 2,071 Visits
In chapter 1, I described how the planet is becoming ‘smarter’ and that this transformation is creating enormous amounts of new data that needs to be effectively managed. In this chapter, I’ll review some of the things that complicate the effort to ensure all this data is properly retained, protected and available when needed.
Ideally, you would like to have a single tool that does everything, across the entire enterprise, providing the ability to effectively respond following any type of event. While many vendors promise to solve your problems, nobody can provide this capability in a single package – the problem is just way too complex. But (tease), IBM is driving toward a unified recovery management capability that enables you to manage a selection of tools from a single administrative interface. More on this next week; first we need to ensure that we appreciate the complexity.
The first category is infrastructure – where is the data?
Your IT shop probably includes several if not many types of hardware: computing platforms such as x86, Power, RISC, Sparc, mainframes, etc. And there is a wide array of storage platforms, including direct-attached (DAS), network-attached (NAS), tape, and I'm sure many of you still have optical disks somewhere. And from many different vendors!
On these platforms, you’re going to have different operating systems: AIX, HP-UX, Linux, Solaris, VMware, Windows, z/OS. Then they’re going to be physically located in different places – data centers, staff offices, production facilities, remote/branch offices, disaster recovery sites, and warehouses.
Different types of networks, and the available bandwidth on them, further complicates the system. You have local-area (LAN), wide-area (WAN), storage-area (SAN) and metro-area (MAN) networks; additionally you may have cable networks running to some offices (particularly home offices), telecommunications networks that now carry data, and USB connections to some storage devices. And finally, you likely have important data being created and stored on user workstations.
How many tools do you use just to cover this level of complexity? But wait – there’s more! The next question is: who owns the data?
We also have to matrix in the different types of applications you have – general file systems; email, instant messaging and collaboration systems; databases such as DB2, Oracle, SAP, SQL Server and MySQL; and your industry-specific mission-critical applications such as CAD/CAM, medical records management, software development, manufacturing resource planning (MRP), or customer relationship management (CRM).
Now consider that the data created and used by any of these applications may be on any hardware platform and operating system, in multiple locations, using a variety of networks. A lot of the data may be on user workstations as well. Oh my!
But there’s still more – what can go wrong?
As I noted in my last blog, lots of things can go wrong, and you really need a different response for each of them:
OK, now draw a line from every block on the diagram above to every other block and tell me what your backup and recovery plan is for every line – even in this simple diagram, there are 100 different scenarios, but when you consider all the variables, it may be millions. What tools would you use, who will use them, how long will it take to recover, and how much data will be lost? And what does it cost?
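To see how quickly those scenario counts explode, here is a back-of-the-envelope multiplication using hypothetical counts for the categories described above (the specific numbers are mine, for illustration only – your environment will differ):

```python
# Illustrative counts per category (assumptions, not from the post):
platforms = 6          # x86, Power, SPARC, mainframe, ...
operating_systems = 7  # AIX, HP-UX, Linux, Solaris, VMware, Windows, z/OS
locations = 6          # data center, branch office, DR site, ...
networks = 5           # LAN, WAN, SAN, MAN, USB/local
failure_types = 8      # crash, corruption, deletion, site loss, ...

# Each combination is, in principle, a distinct recovery scenario to plan for
scenarios = platforms * operating_systems * locations * networks * failure_types
print(scenarios)  # 10080 scenarios from even these modest counts
```

Even with single-digit counts per dimension, the product lands in the tens of thousands, which is why "millions" is not an exaggeration for a large enterprise.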
More on that next time!
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
In response to: Storage Consolidation with SONAS and TSM
Rich,
I love how you used your winter/summer clothing exercise as an example of consolidating storage and the pros and cons of having a large storage container vs. multiple smaller containers. I also put my off-season clothes in containers and should have started moving in the spring/summer attire and moving out the winter... maybe next weekend.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  ibm ibmpulse tivoli-storage-manager storage storage-blog bare-machine-recovery tsm recovery ibmstorage pulse pulse2010 1,699 Visits
While I was at Pulse 2010 in Las Vegas, I had the opportunity to interview Scott Sterry from Cristie Software Limited. Cristie Bare Machine Recovery integrates with IBM Tivoli Storage Manager to provide a Bare Machine Recovery (BMR) solution for Windows®, Linux, Sun Solaris and HP-UX.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  virtualization tivoli storage resource-mgmt 983 Visits
In response to: Managing the tidal wave of data with IBM Tivoli
Thanks for posting the white paper. For more information about Tivoli Storage visit the blog at http://www.ibm.com/blogs/tivolistorage
Rajendran Subramaniam 060000D5Y9 firstname.lastname@example.org Tags:  archive ibmpulse storage ibm pulse storage-blog information-archive ibmstorage pulse2010 1,159 Visits
In response to: Viral Friday - Pulse 2010 - Information Archive Demonstration
Thanks for the video. Also here is a link to the Smart Archive http://www-01.ibm.com/software/data/smart-archive/?cm_sp=MTE9840
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage ds5000 xiv hardware storage-management ibmstorage storage-blog o systems-storage midmarket software storage-software ds8000 sap 2 Comments 3,335 Visits
In the second half of 2009, the International Technology Group (ITG) was contracted to perform a detailed analysis of IBM and competitive storage offerings for SAP, to determine a three-year total cost of ownership (TCO) for each product included in the comparison. ITG developed two comparisons, one for Large Enterprise accounts and a second for Midmarket accounts, and chose appropriate competitive offerings for each. For the Large Enterprise accounts ITG compared EMC V-Max systems vs. IBM DS8000 systems and HP XP2400 vs. IBM XIV systems. For the Midmarket accounts ITG compared HP Enterprise Virtual Array (EVA) vs. IBM DS5000 systems and HP EVA vs. IBM XIV systems. ITG developed three-year TCO comparisons and provided IBM an Executive Summary and a Detailed Analysis report that can be shared with customers.
Read the outcome of the analysis:
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Midsize Installations - Executive Summary
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Midsize Installations - Management Brief
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Enterprise Installations - Executive Summary
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Enterprise Installations - Management Brief
ITG also participated in a Webcast that is available for replay discussing the results of their studies of comparative disk systems cost for SAP environments in large and midsized organizations.
Richard Vining 2700019R2A email@example.com Tags:  data-protection tivoli unified-recovery-manageme... deduplication compliance archive business-continuity disaster-recovery data-reduction risk-management restore recovery service-management storage-blog retention backup 1,893 Visits
Unified Recovery Management #3: Recovery Considerations
Welcome back! In chapter 2, I probably scared you senseless with the incredible complexity that storage and backup administrators face in trying to manage data across a wide array of infrastructure and application types, adapting tools and processes to react to a wide array of things that can go wrong, all to ensure that the impacts on users and business operations are minimized.
In this chapter, I’ll attempt to put a little structure around how to cost-effectively address this daunting challenge. It’s all about policies that balance the needs of the business against the resources you have – money, people, infrastructure (or more simply, money!).
If you try to take a ‘one-size-fits-all’ approach to data protection and recovery management, you are either going to spend way too much money (putting the solvency of your organization at risk), or you are not going to meet the needs of the most critical business applications (putting competitiveness and long-term viability at risk).
So the answer is to apply the right technologies and policies to each application need. And yes, this will add another layer of complexity to the environment, but there isn’t much choice.
This diagram lists just some of the things you should consider when creating a recovery plan for each type of data, in each location, for each of the things that can reasonably go wrong.
The first one is the Recovery Point Objective (RPO). This measures how much data you’re willing to put at risk, in terms of the time between backup operations. If you’re backing up a system once each night, you have an RPO of 24 hours, and all of the data created and changed since the last backup is at risk. That’s obviously not good enough for many applications in many industries, but it is good enough for others.
The second consideration is Recovery Time Objective (RTO). This measures the amount of time it takes to recover from an event. Depending on the type and location of the event, RTO can include the time to determine what happened, deploy any needed hardware and other infrastructure, copy the needed data from the backup repository, recreate any lost data if possible (see RPO above), and reconnect your users and other systems. The longer the RTO, the longer the applicable systems may be down, so planning for a short RTO for the more critical applications is appropriate.
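The arithmetic behind these two objectives is simple enough to sketch. The numbers below are hypothetical, just to show how RPO exposure and RTO components combine:

```python
def data_at_risk_gb(change_rate_gb_per_hour: float, backup_interval_hours: float) -> float:
    """Worst-case data created or changed since the last backup (the RPO exposure)."""
    return change_rate_gb_per_hour * backup_interval_hours

def estimated_rto_hours(diagnose: float, provision: float, restore: float, reconnect: float) -> float:
    """RTO is the sum of every step between the incident and users working again."""
    return diagnose + provision + restore + reconnect

# A system backed up nightly that changes 2 GB per hour:
print(data_at_risk_gb(2.0, 24))         # 48.0 GB exposed in the worst case
# Hypothetical recovery steps: 1h to diagnose, 2h to provision hardware,
# 4h to copy data back, 1h to reconnect users:
print(estimated_rto_hours(1, 2, 4, 1))  # 8 hours of downtime end to end
```

Running those numbers per application is a quick way to see which systems need more frequent backups (a tighter RPO) or pre-staged infrastructure (a shorter RTO).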
Next, you’ll probably need to consider the costs of the solution in terms of acquisition costs for the solution, plus labor, bandwidth, on-going services, etc. The key to a successful recovery plan is to balance these costs against the needs of the business – ensuring that you are delivering the appropriate levels of RPO and RTO at the lowest possible costs.
The last consideration is probably obvious to everyone, but you’re not going to want to deploy any recovery solution that negatively impacts business operations. For example, applying an aggressive RPO (frequent backups) to a critical application isn’t going to work if the recovery solution requires that you stop and close the application to perform the backup. The cure is not allowed to kill the patient.
So, what can you do? There are lots of choices and point solutions – from many vendors - to address each of the permutations that your plan may have, and I’ll cover many of them in my next blog. Then I’ll start looking at ways to tie all those technologies together to create a truly Unified Recovery Management platform.
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  unified-recovery-manageme... data-reduction archive backup storage-blog deduplication data-protection recovery restore 1,339 Visits
Unified Recovery Management #4 – Technology Choices
In my last entry, I explored some of the considerations that you should include in an overall data protection and recovery strategy, including matching the value of the data being protected to service level expectations such as Recovery Point Objectives (RPO), Recovery Time Objectives (RTO) and overall costs.
Today I’ll cover some of the many technology choices that are available to help you meet your objectives. As in previous installments in this blog, this adds another layer of complexity to the program – which technology do you use to meet which need? And at the end of the day how many different tools can you really manage effectively to meet the complex challenges of protecting your data?
Tape-based backup is probably still the most widely-used backup method in corporate and government environments. The challenges with tape have been well documented – lots of manual processes that can lead to errors and recovery problems; poor RPO and RTO performance; and the physical movement of tape cartridges that can create data security risks. For these reasons, many organizations have moved to a blend of disk and tape for backup, enabling faster and more frequent backups, and faster restores from disk, while moving backup data to tape in the background for longer-term retention.
Mirroring and replication are good technologies for system-level recovery and fail-over. However, they can leave you with a big hole in your recovery strategy – the loss or corruption of individual files – since any loss will be immediately replicated to the backup system, leaving you with two bad copies of your data.
Continuous Data Protection, or CDP, takes the benefits of replication and adds in point-in-time recovery options. The problem with CDP is cost – it requires far more storage capacity than other solutions, and can strain network bandwidth as well.
All three of the above technologies are also susceptible to being unable to recover any files that are open at the time of the data loss incident, such as a system crash – although there are utilities available to help mitigate this issue.
Snapshots fix the open file issue by creating application-consistent time-based recovery points. It is necessary to pause or “quiesce” the application for a very short period of time to accomplish a snapshot, but it’s far faster than a tape backup because it only takes incremental changes over a much shorter period of time. Snapshots can be run very frequently, often many times per hour, to meet aggressive Recovery Point Objectives (RPO). Hardware-based snapshot technologies are not always ‘application-aware’, so the consistency (ability to fully recover) of open files can be a problem.
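The copy-on-write mechanism that makes snapshots so fast can be illustrated in a few lines. This is a conceptual sketch only, not how any particular array implements it: taking the snapshot copies no data at all; the old value of a block is preserved only when that block is first overwritten afterward.

```python
class Volume:
    """Toy volume illustrating copy-on-write snapshots (conceptual sketch only)."""

    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshot_delta = None  # block index -> value at snapshot time

    def take_snapshot(self):
        # Effectively instantaneous: no data is copied at the quiesce point
        self.snapshot_delta = {}

    def write(self, index, value):
        # Preserve the original block the first time it changes post-snapshot
        if self.snapshot_delta is not None and index not in self.snapshot_delta:
            self.snapshot_delta[index] = self.blocks[index]
        self.blocks[index] = value

    def read_snapshot(self, index):
        # The point-in-time view: preserved copy if changed, live block if not
        return self.snapshot_delta.get(index, self.blocks[index])

vol = Volume(["a", "b", "c"])
vol.take_snapshot()
vol.write(1, "B")
print(vol.blocks[1])         # "B" -- the live volume sees the new data
print(vol.read_snapshot(1))  # "b" -- the snapshot still sees the old data
```

This is why snapshots only "take incremental changes": storage is consumed in proportion to what changed after the snapshot, not the full volume size.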
Disaster Recovery and Business Continuity (DR/BC) services are a key focus for many organizations, especially given the growing number of natural and man-made threats to normal operations. Some companies handle it themselves, others contract it out. Either way, you’ll need to balance overall costs against benefits, matching the needs of individual applications and locations to the service levels provided.
Data deduplication is a much hyped technology that, depending on where it is applied, can reduce the amount of data that needs to be backed up and sent over the network, or reduce the amount of backup capacity required, or both. Most of the gains claimed by deduplication are in environments that perform weekly full backups that cause an enormous amount of duplication. Check out my blog series on data reduction to learn more about this important topic.
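The core idea behind deduplication fits in a short sketch. This is a simplified fixed-size-chunk version for illustration; real products typically use variable-size chunking and far more sophisticated metadata:

```python
import hashlib

def dedup(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks; store each unique chunk only once."""
    store = {}   # chunk hash -> chunk bytes (each unique chunk stored once)
    recipe = []  # ordered list of hashes needed to rebuild the original stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

# A weekly-full-style workload: the same 4 KB block repeated 100 times
data = b"x" * 4096 * 100
store, recipe = dedup(data)
print(len(recipe), len(store))  # 100 references, but only 1 stored chunk
```

Repeated full backups are exactly this kind of workload, which is why the gains claimed for deduplication are largest in environments doing weekly fulls.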
Virtual Tape is a relatively new entrant to the market, combining the best efficiencies of disk and tape, and adding in data deduplication to help meet cost per capacity goals. As a backend repository, a virtual tape library (VTL) does not replace data capture technologies such as backup, replication or archive, but can be an effective complement to them.
I added Reporting to the diagram above, only because you’ll want to have visibility into the functionality and performance of your data protection and recovery environment in order to provide the assurance that your strategy is effective and meeting the needs of the business.
In my next blog, I’ll start to show how IBM can provide encouraging answers to these questions.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage-software tsm ibmstorage tivoli silverstring storage business-partner tivoli-storage-manager bp partners 3 Comments 2,538 Visits
During Pulse 2010 in Las Vegas, I interviewed Alistair Mackenzie from Silverstring, an IBM Business Partner. Just last week Silverstring launched TSMagic, helping you understand your TSM estate like never before... See the news article for more information on TSMagic.
Check out the live video interview with Alistair:
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage-software storage-blog ibmstorage midrange tivoli-storage storage-management ibmsoftware tpc disk 2,282 Visits
IBM Tivoli Storage Productivity Center (TPC) for Disk Midrange Edition V4.1 is now available! Announced last month, TPC for Disk Midrange Edition has been designed to help reduce the complexity of managing midrange SAN environments that include IBM System Storage DS3000, DS4000, DS5000, SAN Volume Controller (SVC) Entry Edition and IBM Virtual Disk System devices by allowing administrators to configure, manage, and monitor performance of their entire midrange storage infrastructure from a single console. This new offering provides customers the equivalent features and functions of the Tivoli Storage Productivity Center for Disk enterprise offering at a fraction of the cost... up to 80% off list price.
TPC for Disk Midrange Edition is part of the IBM Tivoli Storage Productivity Center V4.1 suite of integrated storage infrastructure management products that are designed to help you manage almost every point of your storage network, between the hosts through the fabric and to the physical disks in a multi-site enterprise. It can help simplify and automate the management of storage data and the networks to which they connect.
Utilizing a new Storage Management Initiative - Specification (SMI-S) Common Information Model (CIM) agent, Tivoli Storage Productivity Center for Disk Midrange Edition can provide over 40 different reports and performance metrics including:
Administrators can monitor and analyze performance statistics for these storage systems down to five minute intervals. The performance data can be viewed in real time in the topology viewer, stored for historical reporting, or used to generate timely alerts by monitoring thresholds for various device parameters.
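The threshold-alerting pattern described above is straightforward to sketch. The metric name and limit below are hypothetical, purely to illustrate checking five-minute samples against a threshold:

```python
def check_thresholds(samples, limit_ms):
    """Return the (timestamp, value) samples that breach a latency limit."""
    return [(ts, v) for ts, v in samples if v > limit_ms]

# Hypothetical five-minute response-time samples for one storage device
five_minute_latency = [("09:00", 8.2), ("09:05", 9.1), ("09:10", 23.7)]
alerts = check_thresholds(five_minute_latency, limit_ms=20.0)
print(alerts)  # [('09:10', 23.7)] -- only the breaching sample raises an alert
```

The same check, run against each collected metric, is what turns historical performance data into timely alerts.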
Tivoli Storage Productivity Center for Disk Midrange Edition is set apart from IBM Tivoli Storage Productivity Center for Disk because it is:
To learn more, visit the TPC for Disk Midrange Edition Web page and for more information on the IBM Tivoli Storage Productivity Center suite of products, read the data sheet
Delbert Hoobler 1000008PR6 email@example.com Tags:  storage tsm tivoli storage-blog snapshot storage-management storage-software flashcopy 2,170 Visits
In December of last year, I blogged about IBM Tivoli Storage FlashCopy Manager for Windows version 2.1. I talked about how FlashCopy Manager provides fast application-aware backups and restores leveraging advanced snapshot technologies. I also discussed how FlashCopy Manager on Windows supports Microsoft SQL Server and Microsoft Exchange Server using Microsoft's Volume Shadow Copy Service (VSS) and how it integrates into your enterprise whether you have Tivoli Storage Manager or not. So, if you haven't read my previous blog about FlashCopy Manager on Windows, why not check that out first, then come back to learn more about the new features we just announced!
This Friday, June 4, 2010, IBM will release IBM Tivoli Storage FlashCopy Manager Version 2.2. Some of the exciting new Windows features in this release include:
Did you know? FlashCopy Manager also supports UNIX platforms! Some of the exciting new UNIX features included in FlashCopy Manager Version 2.2 are:
For more details, read the IBM Tivoli Storage FlashCopy Manager Version 2.2 announcement.
General information about IBM Tivoli Storage FlashCopy Manager is located here.
Delbert Hoobler 1000008PR6 firstname.lastname@example.org Tags:  tivoli storage-blog storage storage-software storage-management tsm flashcopy snapshot exchange 2 Comments 3,901 Visits
IBM just announced that Tivoli Storage Manager for Mail - Data Protection for Exchange 6.1.2 and IBM Tivoli Storage FlashCopy Manager 2.2 now support Microsoft Exchange Server 2010! For more details, read the FlashCopy Manager Version 2.2 announcement or see my blog from yesterday.
There are a few important things to take note of. Microsoft Exchange Server 2010 included some significant changes, a number of which affect backup and restore. For example, under Exchange Server 2010:
With the release of Data Protection for Exchange version 6.1.2 and IBM Tivoli Storage FlashCopy Manager version 2.2 on June 4, 2010, we have implemented support for these changes. Here are details about the TSM functionality for Exchange Server 2010 that will be available on June 4, 2010:
Note: VSS backups to the TSM Server are enabled without the requirement for a TSM for Copy Services or FlashCopy Manager license.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  ibmstorage storage-blog tivoli data-availability data-protection storage ibm healthcare virtualization 1,698 Visits
Working with IBM, a hospital in Asia Pacific gained a data protection solution that meets users' data availability requirements, scales on demand to support a growing warehouse of patient data and medical images, and simplifies data migration and data recovery tasks.
The benefits of the solution include a 50% reduction in the backup window, restores of individual Microsoft Exchange objects in minutes, and system restores in under 10 minutes.
Read the complete case study to see how this Asia Pacific hospital gained peace of mind with virtualized data protection from IBM.
More success stories of other customer implementations of IBM technologies can be found here
Delbert Hoobler 1000008PR6 email@example.com Tags:  exchange storage-software flashcopy tivoli storage-blog storage tsm snapshot storage-management 9 Comments 11,153 Visits
I wanted to share some information about an article that we just published with regards to backing up Exchange Server 2010.
Along with all the other new features of Exchange Server 2010, Microsoft introduced Database Availability Groups (DAGs). DAGs are part of the large focus that Microsoft put on High Availability and Site Resilience within Exchange Server 2010. DAGs allow you to have passive database copies (aka "replicas") that can serve as hot standbys for protection against machine failures, database failures, network failures, viruses, or other issues that may cause an access problem to a database.
DAGs are similar in function to Exchange Server 2007 Cluster Continuous Replication (CCR) replicas. However, they extend the capabilities even further. One of the key benefits that customers get when they use DAGs in their enterprise is the ability to completely offload backups from their production Exchange Servers. That means they can run all of their backups from a database copy instead of the production database so as not to impact their production Exchange servers. This enables the production Exchange Servers to spend their resources on doing what they know best, i.e. handling email and facilitating collaboration.
IBM Tivoli Storage Manager for Mail : Data Protection for Exchange and IBM Tivoli Storage FlashCopy Manager completely support backing up DAG passive database copies. Data Protection for Exchange and FlashCopy Manager also support using those backups to recover the production database as well as for recovering individual mailboxes and items. You can find more details in the IBM Tivoli Storage Manager for Mail: Data Protection for Microsoft Exchange Server Installation and User's Guide V6.1.2.
We just published an article (which includes a sample script) to help you automate backing up your Exchange Server 2010 DAG databases. We know that you will find this quite helpful in setting up your backup strategy:
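The selection logic at the heart of such an automation script can be sketched as follows. To be clear, this is my own hypothetical illustration of the idea, not the published sample script: the point is simply to prefer a healthy passive copy so the backup never touches the active (production) copy.

```python
def pick_backup_copy(copies):
    """Pick a healthy passive DAG database copy to back up, or None.

    copies: list of dicts with 'server', 'active' and 'healthy' keys
    (a hypothetical representation of DAG copy status, for illustration).
    """
    passives = [c for c in copies if not c["active"] and c["healthy"]]
    return passives[0] if passives else None

# Hypothetical DAG with one active copy and two passive copies
dag = [
    {"server": "MBX1", "active": True,  "healthy": True},
    {"server": "MBX2", "active": False, "healthy": False},
    {"server": "MBX3", "active": False, "healthy": True},
]
print(pick_backup_copy(dag)["server"])  # MBX3 -- healthy passive copy chosen
```

Running the backup against the chosen passive copy is what offloads the I/O from the production mailbox server.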
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  asset automation user usergroups usergroup storage tivoli community maximo group tuc security tivuser ug 2 Comments 1,666 Visits
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  deduplication data-protection data-reduction business-continuity unified-recovery-manageme... storage-blog disaster-recovery service-management risk-management archive compliance retention recovery restore tivoli backup 2,392 Visits
Chapter 5: Unified Recovery Management – How IBM can Help
In my earlier postings on the topic of Unified Recovery Management (sorry for being away for so long), I laid out in excruciating detail the complexity that is facing today’s backup administrators: many different applications, on different hardware/OS platforms, in different locations, with different recovery point and recovery time objectives (RPO & RTO) to meet the operational requirements of the organization.
In the last entry, I covered some of the many technologies that are available and widely used to address different aspects of this complex issue. At the heart of the problem is this question: can any one backup administrator really have true visibility and control of the entire data protection and recovery process when there are so many solutions and interfaces in use?
IBM Software has been working for several years to address this challenge by bringing our various data protection software products under the control of a single management interface, which is also common to many other IBM Tivoli software products.
The goals of this development initiative are: to manage the entire data protection and recovery infrastructure from a single administrative interface; to unify the management of data within an integrated portfolio; and to understand where all the recovery points are, manage them efficiently, and provide the interfaces to recover whatever data is needed, wherever it resides.
This interface is called the Tivoli Integrated Portal (TIP), and from it you can launch, monitor and manage the various Tivoli Storage data protection and recovery software products, simplifying management tasks.
A unified Recovery Management approach, such as offered by IBM, can:
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Vince Padua 0600000RVG email@example.com Tags:  roi tivoli bakup data-deduplication tsm recovery storage-software storage-blog ibmstorage data-protection ibm 3 Comments 6,012 Visits
At the recent Gartner IOM 2010 conference in Orlando, Florida, I had the good fortune of listening to a series of interesting topics and meeting some really smart people. As one might have guessed, the bulk of the sessions focused on virtualization and cloud topics. But the one topic that piqued my interest was unrelated to virtualization and cloud - it was deduplication and was hosted by Dave Russell.
The intent of the session was to bring forward some customer examples of deploying deduplication technologies in backup and recovery solutions. Most of you that read this blog know that deduplication and data reduction have been a hot topic in the industry. And as you likely know, almost every major vendor out there offers some form of deduplication with its associated benefits.
This session provided us two customers who were willing to talk about their experiences with deduplication and the benefits they've received. One customer is using CommVault and the other is using IBM Tivoli Storage Manager v6 (TSM). While both customers showcased the quantified benefits from deduplication, the presentation from the TSM customer went beyond just the benefits of deduplication. The TSM customer revealed their quantified benefits and also identified some of the best practices they developed regarding deduplication.
This particular TSM customer is a large producer of natural gas in the U.S. The customer's environment has TSM managing about 1.3 petabytes of data from more than 1,500 nodes. Overall, their approach to managing backup storage is to do it as efficiently as possible and to reduce the overall amount of backup data under management.
Prior to leveraging TSM deduplication, this customer began with "incremental forever" and compression. Once TSM v6 was released, they adopted deduplication at the server and client in concert with the other data reduction features provided by TSM.
As they began evaluating their use of deduplication, they had to deal with demands from their internal customers - DBAs and Exchange admins like full backups, etc. Furthermore, they had to consider their rate of data change, evaluate retention policies, and ensure that their restore requirements weren't negatively impacted by the use of deduplication.
After significant testing and planning, the customer decided that they would initially deploy deduplication for their Oracle databases and Windows OS and system state backups. The results of using TSM deduplication were impressive ...
Oracle deduplication results - 75% reduction of Oracle backup data after deduplication. This was on 3.8TB of physical space on disk and about 15 TBs of data on tape.
And their results on Windows OS and System State were a whopping 94% ... taking them from 172GB of managed data down to 11.4 GB. In this scenario, the customer leveraged TSM 6.2 client- or source-side deduplication.
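The Windows figures above are easy to sanity-check with a one-line reduction calculation:

```python
def reduction_pct(before_gb: float, after_gb: float) -> float:
    """Percentage reduction from a before/after pair of capacities."""
    return round((before_gb - after_gb) / before_gb * 100, 1)

# 172 GB of managed Windows OS / System State data down to 11.4 GB
print(reduction_pct(172, 11.4))  # 93.4 -- i.e. the "whopping 94%" quoted above
```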
Overall, very impressive results. By leveraging the data reduction features within TSM, the customer was able to save by using fewer tape library cells, tape drives, and disks.
In the end, the customer stated that TSM data reduction (with deduplication) helped them meet their objective - efficiently reducing data under management. Furthermore, it allowed them to reduce their overall hardware costs and meet or improve restore requirements. The last comment the customer made before closing the session was that with all the various TSM data reduction capabilities in production, their job had ultimately gotten simpler now that their environment was running more efficiently ...
This is a fantastic story that I really enjoy sharing. If you are a TSM customer and have benefited from its data reduction technologies, then please give me a shout as I would like to hear your story as well.
Re: Silverstring Launches Predatar 6 for TSM to Deliver Smarter Enterprise Data Protection, Near Perfect Backup Success
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  tivoli smarter storage university planet software manager predatar silverstring aberdeen ibm partner alistair mackenzie brian robertson business 1,379 Visits
Siemens AG Austria - optimized system performance through parallel data backup solution using IBM Tivoli Storage Manager
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  data data-availability storage-blog data-backup backup ibmstorage 1,982 Visits
Siemens AG Austria employs about 7,700 staff. Its business activities focus on the three sectors of industry, energy and healthcare as well as on IT solutions and services.
Siemens needed a secure solution that would enable them to record all the data collected in the control center without any gaps, archive it for a period of five years, and store it for possible later analysis.
Siemens worked with IBM to create an optimized system performance solution utilizing a parallel data backup system.
IBM Tivoli Storage Manager (TSM) supports the parallel backup of databases and thereby enables significant savings in time. The parallel backup of the databases prevents backups from failing to complete due to the unavailability of individual components. The system solution offers a high level of quality with optimal performance and is characterized by a high degree of reliability and availability.
Read the complete case study for more information on how Siemens AG Austria optimized system performance through parallel data backup.
More success stories of other customer implementations of IBM technologies can be found here.
Vince Padua 0600000RVG firstname.lastname@example.org Tags:  storage-software ibm data-deduplication recovery ibmstorage earnings data-reduction tsm data-protection bakup storage-blog tivoli 2,140 Visits
IBM posted Q2 results yesterday showing strong performance by the Tivoli brand. Here is an excerpt from the prepared remarks:
Of particular interest for this blog is the continued strength of the storage software portfolio:
Congrats to the team for their continued success ... looking forward to 2H 2010!
Vince Padua 0600000RVG email@example.com Tags:  storage-software tivoli fastback bakup data-deduplication tsm ibmstorage acceleration data-protection storage-blog wan juniper optimization ibm recovery 3,043 Visits
Juniper Networks recently published a solution brief regarding the performance boost you get from using TSM FastBack in concert with their WAN optimization platform (WXC). The value proposition is pretty straightforward: reduced backup times and reduced WAN bandwidth and cost. You can read the full details in the report, but here are a few snippets worth noting:
Conceptual view of the bandwidth savings ...
Savings of backing up 92GB over a 155Mbps link with 100 ms latency:
These savings are above and beyond those you already get with TSM FastBack (taken from the solution brief):
TSM FastBack is a solution that has seen strong adoption from customers with remote offices ... If backup times or bandwidth usage over a WAN are a concern, I suggest you look into the WXC offering from Juniper Networks in concert with TSM FastBack.
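As a rough baseline for savings claims like the 92GB-over-155Mbps case above, it helps to know the theoretical minimum transfer time for the unoptimized link. This simple estimate ignores protocol overhead and the effect of the 100 ms latency, so the real unoptimized time would be longer still:

```python
def transfer_minutes(gigabytes: float, link_mbps: float) -> float:
    """Theoretical minimum transfer time, ignoring latency and protocol overhead."""
    megabits = gigabytes * 8 * 1000  # GB -> megabits (decimal units)
    return megabits / link_mbps / 60

# 92 GB over a 155 Mbps link
print(round(transfer_minutes(92, 155)))  # ~79 minutes before any optimization
```

Any WAN optimization or data reduction that avoids sending bytes at all is cutting into that floor, which is where the headline savings come from.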
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  risk-management unified-recovery-manageme... deduplication data-protection backup data-reduction retention disaster-recovery archive compliance recovery business-continuity restore storage-blog service-management tivoli 1,887 Visits
Announcing IBM Tivoli Storage Manager FastBack v6.1.1
IBM Tivoli Storage Manager FastBack is an advanced continuous data protection and near-instant recovery software solution for business-critical Windows and Linux servers in the data center, remote offices and small- to mid-sized enterprises. Customers use Tivoli Storage Manager FastBack to help reduce the amount of data at risk between backups to almost zero, and to reduce the time to recover from almost any data loss to just seconds. TSM FastBack also includes built-in target-side data deduplication; all of this adds up to reducing the costs of storage, bandwidth and administration. Optional add-ons include Bare Machine Recovery to quickly restore the operating system volume to similar, dissimilar and virtual hardware; and granular recovery of individual e-mail objects from Microsoft Exchange.
On July 30, 2010, IBM released Tivoli Storage Manager FastBack v6.1.1, which includes data deduplication across FastBack Servers and locations when consolidating remote office backup to a central Tivoli Storage Manager Server. These enhancements further improve backup and disaster recovery performance, cut costs, and expand on FastBack integration with Tivoli Storage Manager to provide a true Unified Recovery Management platform.
Also in this new release: support for Microsoft Exchange 2010, including granular e-mail object recovery in TSM FastBack for Microsoft Exchange; a doubling of the amount of data a FastBack server can protect; near-instant restore extended to Linux volumes; and many other performance and ease-of-use improvements.
For more information, please visit http://www.ibm.com/software/tivoli/products/storage/storage-mgr-fastback/
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  xiv ibmstorage data-protection storage-blog restore backup tsm case-study data-backup 1,696 Visits
VINCI PLC, based in Watford, UK, is the largest British arm of VINCI, the world’s leading concession and construction group. The company operates in the building, civil engineering, air, facilities and technology sectors. VINCI PLC has in the region of 4,000 employees and its annual turnover exceeds £1 billion.
To consolidate several acquisitions and implement a new ERP system, VINCI PLC needed to extend its storage infrastructure and sought a reliable, flexible, easy-to-manage platform for handling rapid growth.
Read the complete case study. More success stories of other customer implementations of IBM technologies can be found here.
Delbert Hoobler 1000008PR6 email@example.com Tags:  flashcopy vss ibm tivoli tivoli-storage-manager tivoli-storage flashcopy-manager 1,803 Visits
I have been writing about IBM Tivoli Storage FlashCopy Manager on Windows and some of the new functions that we released earlier this year like Exchange Server 2010 support and SQL Server 2008 R2 support. We are working on some more exciting enhancements and I want to tell you about an early access program for the next release of FlashCopy Manager. If you are interested in looking at and testing some of the new functions and features of the next release of IBM Tivoli Storage FlashCopy Manager, please contact your IBM Tivoli Sales Representative to get more information.
This is a nice opportunity to see what is coming in the next release of FlashCopy Manager and test it in your own environment. Act now!
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  maximoworld celebrity it 2011 maximo call speakers servicemgmt business mgm ibm partner spa best practices tivoli for pulse eam 1,316 Visits
In response to: Pulse 2011 Call for Speakers! It's that time of the year.... Call for Speakers is open for Pulse 2011... submit your storage stories today
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage-software tsm virtual-storage data-reduction data ibmsoftware ibm tivoli-storage-manager ibmstorage backup 1,505 Visits
Here is the URL for this bookmark: http://www.tradingmarkets.com/news/stock-alert/ibm_ibm-and-pancetera-software-support-cal-ema-s-state-wide-emergency-services-1143979.html
Sondra Ashmore 060000GRCD SASHMORE@US.IBM.COM Tags:  center xiv storage-blog tpc management sspc system productivity storage tivoli-storage-productivi... srm 3 Comments 2,759 Visits
I have been working in storage and storage management my entire career (which has been more years than I want to admit) and I was recently advised by a wise co-worker to start writing about it. Although blogging has been around for quite some time and has certainly increased in popularity in recent years, this is the first time I have braved this form of communication. I stared at a blank blinking cursor for inspiration and decided to write about one of my favorite storage products, the Tivoli Storage Productivity Center.
Several weeks ago IBM announced the new 4.2 release of Tivoli Storage Productivity Center. This release includes some interesting enhancements that I am excited to see in the product. One feature that has received a lot of buzz is the lightweight storage resource agents. TPC started down the path of lighter agents when it introduced a slimmer, but not completely lightweight, version of the agents by moving from Java to C for enhanced performance. Those agents were limited to Windows, AIX, and Linux. The new 4.2 release adds HP-UX and Solaris support, as well as support for file and database-level management. The new release is backward compatible, meaning that customers who want to continue using agents they set up previously can do so. New customers are no longer required to use the Common Agent Manager.
TPC 4.2 has introduced full support for XIV devices. TPC 4.1 did have toleration support for XIV (basic discovery and capacity information), but with the new release you can provision, get performance information, and use the data path explorer for your XIV machine.
If you have TPC deployed on a System Storage Productivity Center (SSPC), you can upgrade at any time. Customers buying a new SSPC machine after September 3, 2010 will automatically have TPC 4.2 pre-installed on the machine.
I could say a lot more about the new TPC 4.2 release, but instead I am going to point you to a wonderful blog entry that my colleague, Tony Pearson, wrote when the new release was announced. He provides some great insights about the new features in TPC 4.2.
Wow - I made it to the end of my first blog entry, and I am beginning to understand why blogging has become so popular. I am starting to wonder why it took me so long to write my first one.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  maximo celebrity eam 2011 tivoli speakers partner mgm ibm for maximoworld practices spa pulse servicemgmt business best call it 1,433 Visits
In response to: Pulse 2011 Call for Speakers! Call for Speakers for Pulse 2011 has been delayed to Sept. 22, 2010
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  data-reduction ibmstorage data-backup pulse2011 backup-recovery storage-management ibmpulse data-availability storage-blog storage-software pulse 2,066 Visits
Pulse 2011 Call for Speakers Opens Wednesday, September 22!
Boy oh boy, time sure flies when you're having fun. It seems like I was just at Pulse 2010 in Las Vegas, being a roving reporter, capturing customer, business partner and Subject Matter Expert Videos. It's actually been about nine months and once again it's time to ramp up for Pulse 2011.
Pulse will return to the MGM Grand in Las Vegas February 27 through March 2, 2011. Just like Pulse 2010, we're looking for client speakers to share their success stories and speak in the different track sessions. Do you have a storage success story? What are you doing to make your organization smarter when it comes to storing and backing up your data? How do you gain visibility across your infrastructure, including your storage environment? Are you in control of your data, no matter where it resides? How have you leveraged automation technologies to manage the explosion of data, and the need for instant accessibility? We want to hear from you! What software, hardware and services are you utilizing to deliver better services within your organization, to your internal and external customers? Come share your story of how you're using IBM Storage as a part of your organization's Integrated Service Management implementation.
At Pulse 2010, there were over 300 client speakers and if you weren't a speaker then, you should definitely submit your proposal for Pulse 2011. Check out the benefits of being a client speaker!
Client Speaker Benefits:
Pulse 2011 client speakers will receive complimentary registration to the conference and the first 50 to submit a proposal will receive a FREE hotel accommodations upgrade* to a Celebrity Spa Suite at the MGM Grand if the proposal is accepted!
*The speaker pays for the basic room and will be awarded the upgrade if they submit one of the first 50 papers to be accepted.
Read Jennifer Dennis' blog Pulse 2011 Call for Speakers - Opens 9/22 @ibm.com/pulse! for details on submitting your proposal. Don't delay, get prepared to submit your proposal right away, those 50 upgrades will be going fast!!!
Here are some customer speaker interviews I did during Pulse 2010, hopefully this will give you an idea of what you can submit for your proposal.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  data-backup storage-management ibmpulse pulse2011 ibmstorage backup-recovery storage-software data-availability pulse data-reduction storage-blog 1,157 Visits
In response to: Pulse 2011 Call for Speakers - Share Your Storage Success Story. Call for Speakers has been delayed until Friday, September 24, 2010
Vince Padua 0600000RVG firstname.lastname@example.org Tags:  ibmstorage uk data-protection acceleration tsm backup recovery storage-software wan hsm storage-blog ibm tivoli kentucky supercomputer 1 Comment 2,654 Visits
A new supercomputer at the University of Kentucky has placed it in the top 10 of public universities for compute power.
According to UK President Lee T. Todd Jr., "This supercomputer will allow our world-class researchers to discover new solutions to the complex problems facing the Commonwealth, the nation, and the world."
This new high-performance compute cluster comes with 200 terabytes of usable disk storage. This important data is protected by Tivoli Storage Manager (TSM) with Hierarchical Storage Management (HSM), connected to the UK central backup system.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  case-study storage customer-reference success-story niu ibmstorage tsm xiv data-storage storage-blog 1,220 Visits
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage-blog storage-management ibmstorage pulse2011 pulse ibmpulse 1,621 Visits
Pulse 2011 Call for Speakers is Now Open!!!! Submit Your IBM Storage Success Story and Register for the Event TODAY!
We made it at last! Pulse 2011 Call for Speakers is now officially open. Read my blog from earlier this week to get an idea of what kind of storage success you can submit for your proposal and also see the benefits you, as a customer, can receive if your proposal is selected.
Don't delay... Call for Speakers will end on Nov. 2 and remember, the first 50 customer proposals selected will get a free room upgrade.
Submit your proposal now!
Richard Vining 2700019R2A email@example.com Tags:  archive risk-management compliance tivoli-storage-manager recovery tivoli-storage-manager-fa... data-reduction storage-blog data-protection deduplication backup disaster-recovery restore unified-recovery-manageme... retention business-continuity service-management 2 Comments 3,227 Visits
IBM Tivoli® Storage Manager FastBack v6.1.1
Advanced Recovery Capabilities for TSM Environments
IMPROVE RECOVERY POINT AND RECOVERY TIME OBJECTIVES
Every IT environment contains applications that are critical to the operations and resilience of the organization. Real pain is experienced when these applications go off-line, or when the data that they rely on is lost or corrupted. Traditional once-per-day backup may be leaving too much new data at risk of loss, and current volume-level recovery processes only extend downtime as all the data is restored following a system failure or other disaster.
IBM Tivoli Storage Manager FastBack performs non-disruptive block-level backups for critical Microsoft® Windows and Linux® applications, in the background with no backup window, as often as needed to assure data protection that meets business requirements. And FastBack provides near-instant recovery of entire volumes, enabling immediate and transparent access to protected data even while it is restored in the background.
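The "near-instant recovery" idea described above, where a restored volume is immediately usable while the rest of the data streams back in the background, can be sketched as a copy-on-read loop. The class and method names below are purely illustrative and are not FastBack's actual implementation:

```python
# Illustrative sketch of near-instant volume recovery: the volume is
# usable at once, and any block not yet copied back is fetched from the
# backup repository on first access ("copy-on-read"), while a background
# pass restores the remaining blocks.

class InstantRestoreVolume:
    def __init__(self, repository_blocks):
        self.repo = repository_blocks          # block_id -> bytes (the backup)
        self.local = {}                        # blocks restored so far

    def read(self, block_id):
        # Transparent access: pull the block from the repository on demand.
        if block_id not in self.local:
            self.local[block_id] = self.repo[block_id]
        return self.local[block_id]

    def background_restore(self):
        # Runs alongside normal reads until the whole volume is local.
        for block_id, data in self.repo.items():
            self.local.setdefault(block_id, data)

backup = {0: b"boot", 1: b"data", 2: b"logs"}
vol = InstantRestoreVolume(backup)
assert vol.read(1) == b"data"      # served immediately, before full restore
vol.background_restore()
assert vol.local == backup         # eventually the entire volume is local
```

The point of the pattern is that recovery time as experienced by the application is the time to the first read, not the time to move the whole volume.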
TIGHT INTEGRATION WITH TIVOLI STORAGE MANAGER
With the release of Version 6.1.1, TSM FastBack can now be completely integrated into your Tivoli Storage Manager environment. From the TSM Administration Console, FastBack can be launched and managed; you can set FastBack backup policies and initiate restore operations; you can view FastBack reports; and you can back up the FastBack repository into the TSM Server for long-term data management and retention.
TSM FastBack’s automation and ease-of-use make it an excellent choice for protecting and restoring Windows and Linux server data in remote and branch offices. With the new data deduplication capabilities included as standard in FastBack v6.1.1, remote office data can now be efficiently and cost-effectively replicated over the WAN to a central TSM Server. Fast and easy recovery can be accomplished in the remote office from the local FastBack Server, or in the central site from the TSM Server in disaster recovery situations. Your distributed data can now meet the same levels of protection and retention provided for your core applications through Tivoli Storage Manager.
Tivoli Storage Manager FastBack is an integral member of the TSM Family, providing advanced data protection and recovery capabilities across the entire organization from a single administrator interface.
To learn more, please visit: www.ibm.com/software/tivoli/products/storage/storage-mgr-fastback/
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Re: IBM Smarter Systems Announcement Webcast: Taming the Information Explosion with IBM System Storage
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage infrastructure information webcast ibmsoftware storage-event storage-blog 1 Comment 1,459 Visits
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  ibm webcast ibmsystems midsize ibmstorage ibmsoftware smb 1,308 Visits
In response to: Webcast: Midsize Business: Revolutionary Data Storage Solutions. I can't wait either!
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  deduplication data-protection disaster-recovery archive data-reduction storwize storage-blog service-management unified-recovery-manageme... restore risk-management business-continuity replication recovery backup tivoli 5 Comments 3,927 Visits
October 7, 2010
IBM today announced the upcoming availability of the IBM Storwize V7000, a groundbreaking new midrange storage system. This new solution brings enterprise-class functionality, scalability and management to the midmarket at an attractive price point. All of the storage built into the Storwize V7000 is virtualized, and it can also be extended to virtualize other storage systems in your environment, to leverage your investment in them while simplifying storage management and improving utilization. Cha-ching!
See the IBM Storwize V7000 announcement here
As customers begin to evaluate and deploy the Storwize V7000, they will naturally look at options for protecting the business critical data that they will be storing on it. The system has built-in FlashCopy® snapshot software, and is available with metro and global mirroring software for high availability and disaster recovery.
These replication solutions are priced aggressively; however, they rely on fibre-based networks to transfer data. These network connections can be quite expensive, especially for midsized businesses, so IBM offers another, more cost-effective solution for off-site disaster recovery.
IBM Tivoli Storage Manager FastBack will selectively replicate Windows and Linux application data from your IBM Storwize V7000 to another location, anywhere in the world, over IP-based (WAN, Internet, Intranet) networks. FastBack’s block-level incremental-forever data capture, with built-in data deduplication, is highly network bandwidth efficient. And since it performs its data acquisition in the background, as often as needed to support tight Recovery Point Objectives (RPO), there is no backup window to concern yourself with.
In addition to using FastBack for cost-effective, IP-based disaster recovery, you can leverage FastBack’s local, near-instant recovery capabilities to restore files, databases, or even entire volumes following almost any type of data loss.
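The bandwidth efficiency described above (block-level, incremental-forever capture with deduplication) boils down to one idea: hash each block of data and send a block over the WAN only if its hash has never been seen at the target. The sketch below is my own illustration of that idea; the function names and 4 KB block size are assumptions, not FastBack internals:

```python
import hashlib

# Sketch of block-level, incremental-forever capture with deduplication:
# split the volume into fixed-size blocks, and transfer a block only if
# its content hash has never been seen before.

BLOCK_SIZE = 4096

def capture(volume_bytes, seen_hashes, wire):
    """Append only previously unseen blocks to `wire`; return the block map."""
    block_map = []
    for off in range(0, len(volume_bytes), BLOCK_SIZE):
        block = volume_bytes[off:off + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen_hashes:          # dedup: transfer once, ever
            seen_hashes.add(digest)
            wire.append((digest, block))
        block_map.append(digest)               # recipe to rebuild the volume
    return block_map

seen, wire = set(), []
capture(b"A" * 8192, seen, wire)               # first backup: two identical blocks,
assert len(wire) == 1                          # so only one is sent
capture(b"A" * 4096 + b"B" * 4096, seen, wire) # incremental: only the "B" block is new
assert len(wire) == 2
```

Because every backup only ships blocks whose hashes are new, there is never a periodic "full" backup to push over the WAN, which is what makes the approach practical for remote offices on constrained links.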
For more information on Tivoli Storage Manager FastBack, please visit:
To download a free 60-day trial version of TSM FastBack, register here.
Storwize is a trademark of Storwize Inc., an IBM company, and used under license by IBM.
Kelly Beavers 0600015T0P email@example.com Tags:  restore unified-recovery-manageme... archive disaster-recovery business-continuity storage-blog risk-management data-reduction storwize replication service-management data-protection backup deduplication recovery tivoli 1,459 Visits
Delbert Hoobler 1000008PR6 firstname.lastname@example.org Tags:  sql tsm vss exchange flashcopy-manager 1,592 Visits
IBM Tivoli Storage Development is currently running a beta program for a new release of FlashCopy Manager.
We are looking for additional participants for this program, who could be new or existing FlashCopy Manager users, as well as Data Protection for Exchange or Data Protection for SQL users, as those products are incorporated into FlashCopy Manager. IBM is very interested in obtaining valuable customer and business partner input on this release prior to General Availability.
We want you to participate! Why not take advantage of this opportunity to help shape these products while at the same time helping to ensure that your environments are understood and your requirements are met? By participating you'll have the ear of development and will be able to participate in weekly discussions with development. This is a win-win for everyone.
If you are interested in participating in this beta program please contact Mary Anne Filosa (email@example.com).
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  archive retention restore data-protection business-continuity storage-blog fastback backup risk-management tivoli data-reduction service-management compliance disaster-recovery recovery unified-recovery-manageme... deduplication 2,165 Visits
I recently saw an interesting cry for help on the Internet. A Tivoli Storage Manager (TSM) customer had a situation where his company was creating millions of new files every day, and the process of scanning the file system to find changed files for incremental backup was becoming a real issue.
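To see why this hurts at that scale, here is a minimal sketch of the scan-based approach the customer was struggling with: every backup walks the entire tree and compares timestamps, so the cost grows with the total number of files, not with the (usually small) number of changes. This is purely illustrative; TSM's actual progressive-incremental logic differs.

```python
import os

# Naive scan-based incremental detection: walk EVERY file and compare
# its modification time against the last backup. With millions of files,
# the walk itself dominates even when nothing changed.

def changed_since(root, last_backup_time):
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):  # O(total files)
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_backup_time:
                changed.append(path)                     # candidate for backup
    return changed

# Journal-based change tracking (recording changes as they happen, so the
# backup reads a short change list instead of walking the tree) is the
# usual way to make this workload scale.
```

The fix is to move the work off the backup window: a change journal makes each backup proportional to the day's changes rather than to the size of the file system.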
Tier 1 analyst report places IBM in the ‘Leader’ quadrant in the evaluation of the storage resource management software market
Amalore Jude 270003DGKQ email@example.com Tags:  storage-management tpc san-management srm gartner tivoli-storage-productivi... storage-blog magic-quadrant 1 Comment 2,183 Visits
In the 2010 Magic Quadrant for Storage Resource Management and SAN Management Software, Gartner positions IBM in the ‘Leader’ quadrant.
The report reflects the enhancements IBM has delivered in the recently announced Tivoli Storage Productivity Center version 4.2. The new version comes packed with enhancements that include superior management control, expanded storage device support, and automated end-to-end storage provisioning.
Tivoli Storage Productivity Center offers open, standards-based, scalable management capability for heterogeneous physical and virtual storage environments, enabling you to achieve the three tenets of effective storage management: simplify storage deployment and management by reducing implementation time and operational complexity; optimize storage performance and utilization to significantly improve uptime of mission-critical applications; and centralize end-to-end storage management of IBM and non-IBM systems that comply with SMI-S standards.
Gartner’s 'Leader' rating of IBM’s TPC validates…
Tivoli Storage Productivity Center suite includes products to specifically manage Data, Disk and Replication. For more information on TPC, visit ibm.com. To download the 2010 Gartner report, click here.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  tivoli ibm customer-reference service-management storage storage-management storage-blog 1,403 Visits
The Central Depository Company of Pakistan Limited (CDC) is the only depository in Pakistan, handling the electronic settlement of transactions carried out at the country's three stock exchanges.
With numerous point management tools, time-consuming manual processes and no single help desk, IT administrators were constantly operating in a reactive mode and faced just 90 percent system availability.
IBM Business Partner Gulf Business Machines helped CDC implement an Integrated Service Management solution from IBM that increases IT efficiency while improving the effectiveness of business services.
90 percent reduction in average time for root cause analysis; estimated 50 percent reduction in time to support new lines of business; 98 percent improvement in service level agreement (SLA) levels.
"IBM Tivoli Storage Productivity Center gave us greater visibility into storage utilization, helping us optimize capacity planning and improve our storage ROI to save 30%"
—Syed Asif Shah, Chief Information Officer, Central Depository Company of Pakistan Limited
Read the complete case study for more details on the solutions CDC used to implement an Integrated Service Management solution.
More success stories of other customer implementations of IBM technologies can be found here.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  tivoli-storage-manager data-recovery tsm data-backup storage-software ibmstorage storage-blog backup-recovery 1 Comment 2,095 Visits
New Video: Tivoli Storage Manager runs a smarter Data Center
Ohio Health has eight member hospitals, nine affiliate hospitals and numerous out-patient facilities throughout Ohio. Many of their clinical systems run on pSeries hardware with the AIX operating system, spread across a primary and a secondary site. Backups are critical in their clinical environment because they affect patient care. They use Tivoli Storage Manager for their backup environment: Tivoli Storage Manager writes directly to the primary site, and the data gets replicated to the second data center. Using a disk-based backup method, they have shaved seven hours off admin processing time because they no longer have to write off-site copies.
Watch the video and hear why Ohio Health loves using Tivoli Storage Manager
Maria Huntalas 1200007VFS firstname.lastname@example.org Tags:  backup-recovery virtualization storage-blog data-protection data-reduction healthcare storage-software data-availability ibmpulse 2 Comments 3,018 Visits
VCU Medical Center is one of the leading academic medical centers in the United States and the only academic medical center in central Virginia, offering state-of-the art care in more than 200 specialty areas along with Level 1 trauma care.
For VCU Health System, technology provides the foundation for transforming clinical services and delivering patient care. However, with a heterogeneous storage infrastructure and no single user interface, the team’s three storage engineers faced significant hurdles in managing growing data volumes and recovering data quickly when needed.
Working with IBM, the health system implemented a virtualized, scalable and high-performance storage infrastructure that improves service levels, reduces costs, mitigates risks and supports an increasing amount of data (growing at more than 20 percent annually).
Reduced data recovery time; data migration process shortened, with a greater probability of success; standardization and consolidation of storage systems reduced the storage footprint and allowed data center temperatures to be raised from 43 to 68 degrees, lowering cooling and energy costs; reduced storage spending.
“With Tivoli Storage Manager, we can set multiple recovery point objectives and the XIV allows us to keep multiple snapshots of the data without impacting performance. So we can have copies and copies and copies of the data where we couldn’t before.”
—Greg Johnson, CTO and Director, Technology & Engineering Services, VCU Health System
Read the complete case study for more details on how VCU Medical Center worked with IBM to gain uninterrupted data access.
More success stories of other customer implementations of IBM technologies can be found here.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  ibm pulse-2011 service-management integrated-service-manage... pulse 968 Visits
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  data-avaialbility ibmstorage fastback data-backup backup-recovery disaster-recovery storage-management data-recovery storage-blog storage-software tsm-fastback 2,142 Visits
New Video: ManTech International helps the United States Department of State reduce their backup and recovery time using Tivoli Storage Manager FastBack
Peter Stark is Executive Director of ManTech International, which is under contract to the United States Department of State to provide global IT modernization of all State Department information systems around the world. The department operates two physically separate networks, for classified and unclassified data, with up to 3,000 servers spread throughout the world.
Using the Tivoli Storage Manager FastBack solution, they are able to take eight snapshots a day from the Exchange server, each taking only two or three minutes to run, and can recover objects in 5 or 10 minutes, whereas previously this was not feasible with a 46-hour backup and recovery time.
Amalore Jude 270003DGKQ email@example.com Tags:  ibm-tpc storage-blog ibm-srm storage-resource-manageme... san-management tivoli-storage-productivi... gartner storage-management 3,929 Visits
Tivoli Storage Productivity Center (TPC) now enables enterprise-wide management of IBM Storwize V7000, a new and innovative midrange storage system that is ideal for customers who seek to virtualize and integrate storage systems, both IBM and non-IBM.
In addition to the device-level management software that comes packaged with Storwize V7000, TPC offers incremental benefits included in its latest version, 4.2.1. TPC for Disk Midrange Edition (MRE), announced earlier this year, is ideally suited and recommended for use with Storwize V7000.
Simplified deployment and visibility
TPC for Disk MRE supports Storwize V7000 during discovery as a new device type with different types of managed disks. TPC’s quick discovery and configuration capabilities enable you to attach the storage device with ease and help you configure it efficiently – i.e., plan for replication while provisioning the device. Disk MRE also enables Launch in Context and Single Sign-on, significantly reducing the burden on storage administrators.
TPC’s topology viewer offers a collective view that includes Storwize V7000, helps differentiate between external array-based disk and local disk, and displays tiering information. TPC for Disk MRE also extends thin provisioning support for Storwize V7000, enabling increased utilization and lowered costs.
Storwize V7000 offers unmatched performance and availability among midrange disk systems. Adding TPC for Disk MRE enhances performance monitoring by capturing metrics such as input and output (I/O) and data rates, and cache utilization, from a single console. TPC helps establish threshold levels based on business priorities and raises alerts when these levels are breached. This helps administrators avoid the ‘knee curves’ while proactively managing performance and service levels by tracking historical information.
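The threshold-and-alert behavior described above amounts to comparing sampled metrics against per-metric limits while retaining the samples for trend reporting. The sketch below illustrates the pattern only; the metric names and limits are made up for the example and are not actual TPC settings:

```python
# Illustrative threshold alerting: compare sampled metrics against
# per-metric limits and raise alerts on breach. Metric names and limits
# are invented for the example, not TPC configuration.

THRESHOLDS = {
    "io_rate_ops": 10_000,      # alert above 10k I/O operations per second
    "cache_util_pct": 90,       # alert above 90% cache utilization
}

def check_sample(sample, history):
    history.append(sample)      # retain samples for historical trend reports
    return [
        f"ALERT: {name}={value} exceeds {THRESHOLDS[name]}"
        for name, value in sample.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    ]

history = []
alerts = check_sample({"io_rate_ops": 12_500, "cache_util_pct": 75}, history)
assert alerts == ["ALERT: io_rate_ops=12500 exceeds 10000"]
assert len(history) == 1
```

Keeping the history alongside the alerts is what lets an administrator see a metric creeping toward its threshold before it is breached, rather than only reacting after the fact.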
Tivoli Storage Productivity Center for Replication enables Storwize V7000 with superior disaster recovery (DR) management providing central control of the replication environment, and helps establish Flash Copy, Metro and Global Mirror relationships.
TPC for Disk MRE offers detailed metrics that include performance data for storage subsystem, controller, cache, I/O, array, disk group and port. These performance statistics can be stored in database tables for later use, so that storage administrators can track and measure service levels.
TPC for Disk MRE also provides information on Easy Tier™, a function that significantly enhances performance with automatic migration of data assets to high-performing solid state drives (SSD).
TPC Standard Edition offers performance metrics-based recommendations for provisioning, including SAN planning (and DR planning with TPC for Replication). TPC’s new ‘Disk Magic’ model helps identify ‘hot spots’, improving storage optimization for Storwize V7000.
Gartner, in its 2010 Magic Quadrant for SRM and SAN Management Software, rates TPC in the ‘Leaders’ quadrant with an enhanced rating, highlighting IBM’s continued investment in product engineering and innovation. In case you have not viewed the report, click here.
Delbert Hoobler 1000008PR6 firstname.lastname@example.org Tags:  snapshot tivoli-storage-manager storage-blog flashcopy-manager storage-management flashcopy tsm ibmstorage storage tivoli ibmsoftware storage-software 2 Comments 3,289 Visits
I wanted to let everyone know that IBM Tivoli Storage FlashCopy Manager for Windows Version 2.2.1 was just released!
In June of this year, I blogged about IBM Tivoli Storage FlashCopy Manager version 2.2.0. I talked about how FlashCopy Manager 2.2 provides fast application-aware backups and restores leveraging advanced snapshot technologies. I also discussed how FlashCopy Manager on Windows 2.2.0 added new support for Microsoft Exchange Server 2010 and Microsoft SQL Server 2008 R2 as well as other enhanced performance and functionality.
We continue to add more functions and features to IBM Tivoli Storage FlashCopy Manager. This past Friday (December 10th, 2010), IBM released IBM Tivoli Storage FlashCopy Manager Version 2.2.1 with the following changes:
Updates Applicable to All Platforms
Updates Applicable to all FlashCopy Manager components that run on AIX, Linux, and Solaris
Updates Applicable to the FlashCopy Manager for Exchange Component
Updates Applicable to the FlashCopy Manager for SQL Component
For more details on the content of this Fix Pack, refer to the technote titled What's new in the Version 2.2.1 IBM Tivoli Storage FlashCopy® Manager Fix Pack.
For details on downloading this Fix Pack, refer to the technote titled Version 2.2.1: Fix Pack IBM Tivoli Storage FlashCopy® Manager.
Richard Vining 2700019R2A email@example.com Tags:  backup archive compliance tivoli retention risk-management unified-recovery-manageme... restore storage-blog recovery disaster-recovery business-continuity data-protection service-management deduplication data-reduction 4,039 Visits
In January, analyst firm Gartner issued its long-awaited Magic Quadrant for Enterprise Disk-Based Backup and Recovery. In it, IBM is placed in the Leaders quadrant. This new report diverges somewhat from previous Magic Quadrant and MarketScope reports on the backup software market, as Gartner recognizes the transformations occurring in this market to address the significant challenges organizations face in adequately protecting their data.
In my opinion, Gartner also recognizes this is a crowded, mature market – after all, everyone has a backup solution. In order to succeed, a vendor must provide added value over traditional solutions that address the growing amount of digital information, the importance of that data to the health of the organization, and the increased service level requirements in an always-on economy.
I believe that Tivoli Storage Manager (TSM) has again been noted as a market leader precisely because IBM has continued to add substantial value to the product over the past few years, including:
But Gartner dinged IBM on a couple of valid points. First, they pointed out our slow adoption of advanced techniques for protecting data in virtual server environments. We do have an excellent non-disruptive solution for in-guest backup of Windows and Linux VMs in TSM FastBack, and we have added some support for VMware VCB and the vStorage APIs for Data Protection to TSM, but the market demands more. We will be delivering an advanced solution in the very near future that should put us in the lead in this important market segment, and you'll be able to see it at PULSE 2011 later this month.
Gartner also noted that while IBM customers tend to love TSM and refute the perception that it is still difficult to deploy and manage (as had been the case years ago), the rest of the market is largely unaware of the improvements and value that the latest versions of TSM deliver. The problem, if you can call it that, is that IBM has more than 1,000 important products in the market, and trying to advertise them individually would be both impractical and ill-advised.
Instead, IBM directs its marketing muscle to higher order strategies, including “Building a Smarter Planet”, Green IT, data center transformation and optimization, integrated service management, security and governance, and of course, cloud computing. It is important to point out that implementing any of these strategic projects, in any size organization, is going to create and rely on a large and growing amount of data. Providing continuous access to this data is a basic requirement for any successful IT service.
In my opinion, only IBM Tivoli provides the data protection and recovery capabilities that can scale and perform to meet these high order challenges, and meet the needs of present and future IT service delivery. To learn more, please visit the Tivoli Storage Management website.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Delbert Hoobler 1000008PR6 firstname.lastname@example.org Tags:  tsm tivoli-storage pulse2011 flashcopy-manager tivoli-storage-manager pulse 1,151 Visits
Hi there! Are you going to IBM Pulse 2011 in Las Vegas next week? I'm going, and I hope you will come join me. I will be presenting Session 1494: Protecting your critical business applications with IBM Tivoli® Storage FlashCopy® Manager on Wednesday, March 2nd at 11:00 am. I will also be in the Pulse Solutions Expo. You can come talk to me and see a demo of FlashCopy Manager on Windows in action. It should be a great week in Vegas. There are a lot of really good education sessions, customer presentations, hands-on labs, BOF sessions, and more. I hope you will stop by and say hello!
Richard Vining 2700019R2A email@example.com Tags:  business-continuity recovery retention unified-recovery-manageme... archive virtualization backup service-management restore vmware data-protection vadp disaster-recovery deduplication data-reduction storage-blog tivoli risk-management compliance 2,320 Visits
On February 22, 2011, IBM announced the newest member of the IBM Tivoli® Storage Manager family of unified recovery management solutions.
While Tivoli Storage Manager already provides a very effective solution for the challenge of protecting VMware systems, the new IBM Tivoli Storage Manager for Virtual Environments offering provides additional improvements for backup efficiencies and advanced recovery capabilities. IBM Tivoli Storage Manager for Virtual Environments eliminates the burden of running backups on a virtual machine by leveraging VMware’s vStorage APIs for Data Protection, which offload backup workloads from the virtual guest machines to a centralized vStorage backup server. The vStorage backup server takes full and incremental snapshots of virtual machines, processes the backups and sends the results to an IBM Tivoli Storage Manager server.
The central vStorage server can itself be installed in a VMware guest, and requires no storage capacity of its own, so this advanced solution can be implemented without the need for additional hardware.
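To make the offloaded, single-pass backup flow described above concrete, here is a rough Python sketch. All names in it (take_snapshot, changed_blocks, BackupServer) are hypothetical stand-ins for the vStorage APIs and the Tivoli Storage Manager server, not actual product or VMware API calls.

```python
# Illustrative sketch: the backup proxy snapshots a VM's disk, diffs it
# against the previous snapshot, and sends only changed blocks to the
# central backup server -- no agent runs inside the guest.

class BackupServer:
    """Stand-in for the central server that stores VM block data."""
    def __init__(self):
        self.store = {}  # vm_name -> {block_index: bytes}

    def receive(self, vm_name, blocks):
        self.store.setdefault(vm_name, {}).update(blocks)

def take_snapshot(vm_disk):
    """Pretend snapshot: freeze a copy of the VM's disk blocks."""
    return dict(enumerate(vm_disk))

def changed_blocks(prev_snap, new_snap):
    """Blocks that differ since the last snapshot (incremental backup)."""
    return {i: b for i, b in new_snap.items() if prev_snap.get(i) != b}

def backup_vm(vm_name, vm_disk, server, prev_snap=None):
    """Runs on the proxy, not in the guest: snapshot, diff, send."""
    snap = take_snapshot(vm_disk)
    payload = snap if prev_snap is None else changed_blocks(prev_snap, snap)
    server.receive(vm_name, payload)
    return snap  # keep for the next incremental pass

server = BackupServer()
disk = [b"boot", b"data", b"logs"]
snap1 = backup_vm("vm01", disk, server)         # full backup
disk[2] = b"logs-v2"
snap2 = backup_vm("vm01", disk, server, snap1)  # incremental: one block sent
```

The point of the sketch is the division of labor: the guest keeps serving its workload while the proxy does the snapshot, diff, and transfer work.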
Tivoli Storage Manager for Virtual Environments provides customers with flexible recovery options: restore individual files, disk volumes or entire Virtual Machines from a single-pass backup. Access to full Microsoft® Windows and Linux® disk volumes can be restored in just a few minutes while the data is copied in the background, reducing downtime by hours or days.
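The "usable in minutes while data copies in the background" behavior is essentially an on-demand restore. A minimal Python sketch of the general technique (hypothetical class and method names, not the product's implementation):

```python
# Illustrative sketch of "instant restore": the restored volume serves
# reads immediately, fetching not-yet-copied blocks from the backup on
# demand, while a background task fills in the remaining blocks.

class InstantRestoreVolume:
    def __init__(self, backup_blocks):
        self.backup = backup_blocks  # source: the backup copy
        self.local = {}              # destination: blocks restored so far

    def read(self, index):
        # On-demand fetch: if the block hasn't been copied yet, pull it now.
        if index not in self.local:
            self.local[index] = self.backup[index]
        return self.local[index]

    def background_copy_step(self):
        # One step of the background restore; returns False when complete.
        for i, block in self.backup.items():
            if i not in self.local:
                self.local[i] = block
                return True
        return False

vol = InstantRestoreVolume({0: b"boot", 1: b"data", 2: b"logs"})
assert vol.read(2) == b"logs"      # usable immediately, before the full copy
while vol.background_copy_step():  # meanwhile the copy completes
    pass
```

The design choice to note: because reads are satisfied either from already-copied blocks or straight from the backup, applications never have to wait for the full copy to finish.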
Tivoli Storage Manager for Virtual Environments also includes the ability to automatically discover new Virtual Machines as they are brought on-line, and auto-assign backup policies so that all data remains protected.
Tivoli Storage Manager for Virtual Environments integrates with and extends the role of Tivoli Storage Manager in meeting needs for backup and recovery, online database and application protection, disaster recovery, data reduction, bare-machine recovery, space management, and archiving and retrieval. In the virtualized environment, it provides both improved frequency of backups to reduce the amount of data at risk, and faster recovery of data to reduce downtime following a failure.
Tivoli Storage Manager for Virtual Environments will be demonstrated at the IBM PULSE 2011 event in Las Vegas (ibm.com/pulse) starting Feb. 27th. More product information is available at http://www.ibm.com/software/tivoli/products/storage-mgr-ve/. The full announcement can be found here.
Lauren Whitehouse, Sr. Analyst, Enterprise Strategy Group: http://www.dataprotectionperspectives.com/2011/02/ibm-tsm-for-virtual-environments-closes-gaps/
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  vmware backup service-management data-protection vadp retention risk-management recovery virtualization archive disaster-recovery restore business-continuity data-reduction deduplication storage-blog unified-recovery-manageme... tivoli replication compliance 4,350 Visits
IBM FastBack for Storwize V7000 provides a cost-effective disaster recovery solution over standard IP-based networks (LAN, WAN, Internet, etc.) for Microsoft® Windows and Linux® server application data. It leverages many of the advanced continuous data protection and near-instant recovery capabilities of Tivoli® Storage Manager FastBack, while being priced and packaged for the IBM Storwize V7000 midrange storage system.
Customers use IBM FastBack for Storwize V7000 to help reduce the amount of data at risk between backups to almost zero, and to reduce the time to recover from almost any data loss to just seconds. IBM FastBack for Storwize V7000 also includes built-in target-side data deduplication, and deduplication across locations when consolidating backups to a central Tivoli Storage Manager system. IBM FastBack for Storwize V7000 also includes fast, granular recovery of Microsoft Exchange e-mail objects, including messages, attachments, contacts, calendars, notes, tasks and journals.
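The target-side deduplication mentioned above works by fingerprinting incoming blocks and storing each unique block only once. Here is a generic sketch of that technique in Python; the class and method names are illustrative, not the FastBack implementation:

```python
import hashlib

# Illustrative sketch of target-side deduplication: the backup target
# hashes each incoming block and stores only blocks it has not seen
# before; each volume keeps a "recipe" of digests for restore.

class DedupStore:
    def __init__(self):
        self.blocks = {}   # digest -> block bytes (stored once)
        self.volumes = {}  # volume name -> ordered list of digests

    def ingest(self, volume, data, block_size=4):
        digests = []
        for off in range(0, len(data), block_size):
            block = data[off:off + block_size]
            d = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(d, block)  # duplicates stored only once
            digests.append(d)
        self.volumes[volume] = digests

    def restore(self, volume):
        return b"".join(self.blocks[d] for d in self.volumes[volume])

store = DedupStore()
store.ingest("serverA", b"AAAABBBBAAAA")  # three 4-byte blocks, one repeated
```

In this toy run, three blocks arrive but only two unique blocks are stored, which is exactly how deduplication across consolidated backup streams saves capacity.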
Using IBM FastBack for Storwize V7000, administrators can select which volumes to replicate and then schedule when to send the data over the Wide Area Network (WAN) or Internet, thereby enabling IP replication capabilities for the IBM Storwize V7000 and effective disaster recovery capabilities without straining existing bandwidth.
IBM FastBack for Storwize V7000 delivers a cost-effective data protection and disaster recovery offering for small and mid-sized organizations. It combines a number of leading-edge, patented technologies to deliver a data protection and recovery offering for servers and applications by helping to:
o Eliminate the need for traditional backup windows by continuously tracking data changes at the block level, with extremely low overhead on the systems it protects
o Improve recovery service levels and meet stringent data protection and retention requirements through use of a flexible policy engine
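The continuous block-level change tracking described in the first bullet can be sketched as a dirty-block map: each write flags a block, and a backup pass copies only the flagged blocks, so no traditional backup window is needed. The names below are hypothetical, not the FastBack code:

```python
# Illustrative sketch of block-level change tracking: writes set an
# entry in a dirty map (very low overhead), and each backup pass copies
# only the blocks that changed since the previous pass.

class TrackedVolume:
    def __init__(self, num_blocks):
        self.blocks = [b""] * num_blocks
        self.dirty = set()  # indices changed since the last backup

    def write(self, index, data):
        self.blocks[index] = data
        self.dirty.add(index)  # cheap: one set insert per write

    def backup_changes(self):
        # Copy only the changed blocks, then clear the dirty map.
        changes = {i: self.blocks[i] for i in sorted(self.dirty)}
        self.dirty.clear()
        return changes

vol = TrackedVolume(4)
vol.write(0, b"boot")
vol.write(3, b"logs")
first = vol.backup_changes()   # both written blocks
vol.write(3, b"logs-v2")
second = vol.backup_changes()  # only the block that changed since
```

Because the tracking cost is a single map update per write, backups can run continuously against a live volume rather than during a quiesced window.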
IBM FastBack for Storwize V7000 can enable protection, replication, and recovery of data for critical applications, including IBM DB2, Microsoft Exchange, Microsoft SQL Server, SAP, and Oracle. It delivers the power to quickly help recover any Microsoft Windows or Linux server data, from anywhere in the organization and from any point in time.
IBM FastBack for Storwize V7000 can store all of the data onto the IBM Storwize V7000 and can replicate that data to an offsite location to another FastBack for Storwize V7000 system, or to a Tivoli Storage Manager Server, for disaster recovery purposes. In addition, IBM FastBack for Storwize V7000 V6.1 can be utilized to help protect production data that resides outside of the IBM Storwize V7000.
IBM FastBack for Storwize V7000 will be announced on Feb. 15, 2011 and will be released in March. The product website and data sheet are at:
Maria Huntalas 1200007VFS email@example.com 1,942 Visits
Effective Storage Management is the best way to make sure you have a contingency plan to protect your data. Be sure to take advantage of the Storage Management sessions during the upcoming Pulse conference in Las Vegas. The sessions at Pulse on Storage Management will include best practices, customer case studies and analyst presentations.
Use this convenient listing to find the locations for these sessions. You can print out this list, or send the URL to your mobile phone for easy reference during the conference.
Monday Feb. 28 MGM Conference Center, Las Vegas
Storage Track Kick Off – Monday 10:45 – 12:15 – Room 312
Title: The Butterfly Effect on Information and Storage
Abstract: Data and information growth is often driven by small technical, business, regulatory, and social changes. For instance, the use of social networks, photo sharing and video sharing websites, search engines, and regulatory changes all started small but are growing into paradigm shifts, resulting in significant changes in the underlying IT infrastructure. Given that data and information are directly interconnected with storage, these small changes in the overall environment can cause data and storage growth to outpace storage capacity, storage budgets, and an organization's ability to stay ahead of these changes, a phenomenon known as the Butterfly Effect. In this session, we will highlight some of the changes driving the Butterfly Effect in your storage infrastructure, how to address the changes, and how IBM can help.
Speakers: Kelly Beavers (IBM), Ron Riffe (IBM), Rachel Dines – Forrester Research
Storage Management Track – Monday PM – Rooms 306 & 307
Sessions in Room 306:
Maria Huntalas 1200007VFS firstname.lastname@example.org Tags:  #ibmstorage #storage #ibmstg #ibmpulse 1,044 Visits
Kelly Beavers, Director of IBM Tivoli Storage & System z Product Management, kicked off the Storage Track yesterday, and it was standing room only! Rachel Dines of Forrester spoke about the 'Butterfly Effect' and its impact on the entire IT infrastructure and the ultimate downstream effect on Storage. It's imperative that Storage be considered when looking at how to reduce costs and do more with less in the data center. Deduplication strategies, data compression and Archiving were raised as the key areas to focus on. Ron Riffe, Business Strategy lead for Tivoli Storage, spoke about how we need to:
The breakouts that followed were all filled to capacity - so much so that we had to "reswizzle" <technical term> the room layout. This was a GREAT problem to have!! We're now in some of the largest rooms here in the Conference Center (Rooms 306/7 and 310), doubling our capacity. The day ended with a Standing Room Only Happy Hour / Birds of a Feather, hosted by Kelly Beavers and her Product Management Team. All in all, the day was outstanding, and there's so much more to come!