IBM Systems Storage Software Blog
Milan Patel 060001K86W email@example.com Tags:  information-infrastructur... data-management backup recovery storage-blog archiving 1,560 Visits
Get ready for Pulse 2010, February 21-24 at the MGM Grand Hotel in Las Vegas. Pulse 2010 will be one of the most important storage and service management conferences of the year, and one that will deliver the information you need to hear directly from your peers, our partners and your IBM Storage team. The conference will include an impressive storage management agenda covering everything from emerging storage technologies, architectures, backup and recovery to archiving, and managing storage in virtualized data centers and server environments. Once again we are very excited to have your peers share best practices from multiple industries, geographies and companies of various sizes.
As your business and data centers continue to evolve, we continue to evolve and adapt our storage and information infrastructure management solutions to meet your growing needs and facilitate your journey to a dynamic storage infrastructure with innovative products and services that matter to your bottom line. Pulse 2010 provides us the opportunity to showcase our commitment to you, and you will see first hand how IBM's increased investment in Storage development has produced an aggressive and exciting roadmap that will expand and enhance our capabilities.
Detailed communications on the hotel and Call for Presentations will be coming your way shortly. The key to a successful event is your participation, and we hope you play an active role in the agenda. Please visit the Pulse 2010 website for more details.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage-blog ibmtivoli ibmstorage information-infrastructur... storage pulse2010 tivoli dynamic-infrastructure pulse service-management ibm 2 Comments 2,792 Visits
In preparation for Pulse 2010 in Vegas, I interviewed John Connor, the Pulse track lead for Storage and Information Infrastructure, to help you generate good ideas for submitting your call for speaker abstracts for Pulse. John will actually be reviewing the submissions with a team of other folks, so here is some advice that you can leverage to increase your chances of being accepted to speak at Pulse.
Me: What are the hot topics in the area of storage and information infrastructure today?
John: In today's tight economy, the hot topics in the area of storage and information infrastructure are how customers are leveraging storage in their information infrastructure to improve scalability, addressing the performance of their storage management assets, cutting capital expenditures by reducing duplicate data to lower storage capacity needs, and simplifying the overall management of their storage infrastructure.
Me: Which topics would you like to see presented at Pulse?
John: Ideally, I would like to see sessions at Pulse that highlight customer success stories: how Tivoli storage management and/or IBM storage solutions helped customers address the challenges we discussed above.
Me: Who are good candidates for submitting abstracts and why?
John: The best candidates to talk about these successes are the folks who implemented them, which would be our customers. Customers are able to discuss their return on investment and how the IBM storage solutions are benefiting them in their everyday business operations. Another good candidate would be our business partners, accompanying and co-presenting with their clients on the IBM storage solutions they've implemented.
Me: What are you looking for in a good proposal?
John: As I mentioned earlier about the topics I would like to see presented, a good proposal is a customer success story around IBM storage solutions, including Tivoli storage management software, and/or storage hardware and storage services. This proposal should describe the initial pain points or problems that existed, how our solutions helped and the lessons learned that could be applied to other customer situations. This type of proposal and session at Pulse will help others learn from each other.
Me: What are the benefits of submitting an abstract for Pulse?
John: Submitting your abstract is a great way to gain visibility for your work and your particular solution. Customers who submit abstracts and are selected will receive a complimentary pass to attend Pulse ($1,995 value) and admission to the on-site VIP client lounge. Attending Pulse is not only a great way to share your company's success in implementing IBM storage solutions, but it is also a great education and networking opportunity.
Me: What is the deadline for submitting call for speaker abstracts?
John: The deadline to submit your abstract is Nov. 20th. Don't delay; submit your proposal today.
With such great guidance from John, you're sure to write a perfect proposal. If you have any questions on submitting abstracts for Pulse or want feedback on an idea, just leave a blog comment. Also, be sure to check out this justification letter if you need that extra edge to convince your boss of the value of attending Pulse. I hope to see you there!
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  ibm storage-software storage ibmstorage tivoli ibmtivoli storage-blog storage-management 1 Comment 5,776 Visits
Welcome to the Tivoli Storage blog.
We have gathered a team of SMEs from various areas of the business to discuss a variety of topics, spanning different interest areas including customer success stories, upcoming events, Business Partner spotlights, technical tips and tricks, product strategy, roadmaps and hot topics -- and of course, topics of interest to you!
Introducing the team!
BJ Klingenberg: Senior Technical Staff Member - Storage Software, IBM Software Group
BJ has over 25 years of storage software strategy and development experience. He has held various technical and management positions, nearly all of which have been related to storage software. His experience in Enterprise storage management includes DFSMS, DFSMShsm, DFSMSdss, and also Tivoli Storage Manager, Tivoli Storage Productivity Center (TPC), as well as System Storage SAN Volume Controller (SVC). He has also been involved in projects which apply ITIL management best practices to Enterprise Storage Management. BJ is currently focusing on storage archiving solutions. BJ is a graduate of the University of Illinois Urbana/Champaign, where he received a Bachelor of Science degree in Computer Science, and holds a Master of Science degree in Computer Science from the University of Arizona.
Dave Rice: Business Partner Marketing, Tivoli Storage Software
Dave currently works in IBM's Worldwide Software Group, where he drives Business Partner Marketing for Tivoli storage software with a focus on the Asia Pacific and Japan geographies. In this role, Dave influences the Business Partner sales pipeline through lead/pipeline analysis, progression activities, partner communications, and programs that provide Business Partner opportunity identification. Dave has been in a broad set of storage software marketing roles for the past 13 years, and has 35 years with IBM. Outside of IBM, Dave's interests include astronomy, as well as home and life improvement projects.
Del Hoobler: Senior Software Engineer
Del is a Senior Software Engineer who has worked for IBM for over 20 years in software design, development and services. For the past 13 years, he has worked on designing and developing software products for the IBM Tivoli Storage Manager (TSM) suite of products. Most recently, Del was the technical development lead for the TSM Windows snapshot (VSS) support for Microsoft Exchange Server and Microsoft SQL Server. Del enjoys working with people and helping solve their complicated IT problems.
Devon Helms: Intern, IBM Tivoli Software
Devon is currently an intern with the IBM Tivoli Software group and a second-year MBA candidate at the Paul Merage School of Business at UC Irvine. His studies focus on business strategy and corporate finance. Before returning to the academic world to pursue his MBA, Devon was a business operations and technology consultant. He has been involved in hundreds of engagements, analyzing and improving his customers' business processes. After his studies are complete, Devon wants to continue to help clients improve the performance of their businesses through business process and financial analysis. In his free time, Devon is an avid marathon runner, rock climber, and SCUBA diver. Devon lives in Lakewood, CA with his lovely wife, Shana, and his 8-year-old Siberian Husky and faithful running partner, Frosty.
Greg Tevis: Tivoli Storage Technical Strategist
Greg has over 27 years in IBM storage hardware and software development. He worked in ADSM/TSM architecture and technical support in the 1990s and was one of the original architects of IBM's storage resource management solution, Tivoli Storage Productivity Center (TPC). He currently has responsibility for technology strategy for all Tivoli Storage and was involved in all of the recent IBM Storage acquisitions including XIV, Diligent, FilesX, Novus Consulting, and Arsenal Digital.
Jason has been the product manager for the Tivoli Storage Productivity Center (TPC) family since joining IBM in 2006. Prior to joining IBM, Jason was a product manager at EMC and Prisa Networks, responsible for the road map and strategy of various storage management offerings. When not helping define the direction for TPC, Jason acts as the President for Classic Soccer Club, a youth soccer club where his son currently plays.
John Connor: Product Manager
John is the Product Manager for IBM's flagship data protection and recovery offerings, the Tivoli Storage Manager family. During John's tenure as product manager, TSM has experienced strong growth, growing faster than the overall market and gaining market share. Prior to joining the Tivoli Storage Manager team in 2005, John helped drive the business strategy for IBM Retail Store Solutions. Before that, John had product and marketing roles in various IBM software businesses, including WebSphere and networking software. John has an MBA from Duke University and an undergraduate degree in electrical engineering from Manhattan College. In his spare time, John enjoys competing in triathlons and has successfully completed an Ironman triathlon.
John R. Foley Jr.: Product Marketing Manager
John is currently a marketing manager within IBM's Tivoli storage software marketing team. John has over 20 years of experience in the areas of storage hardware, storage software and system networking. He has held positions in management, product line management, strategy, business development and marketing. In the past 10 years, he has served on multiple storage projects including SAN storage (fibre channel & iSCSI), Network Attached Storage (NAS) and fibre channel switch offerings. Most recent projects include the introduction of IBM's System Storage N series portfolio stemming from the NetApp OEM agreement and the release to market of IBM's newly introduced Tivoli Storage Productivity Center Version 4 and IBM Information Archive Version 1.
Kelly Beavers: IBM Storage Software Business Line Executive
Kelly joined the IBM Storage Software team in 2004 as Director of Strategy and Product Management for Storage Software and Solutions. Her team is responsible for guiding the development and release of products that capitalize on market/technology trends, and for defining and executing tactical go-to-market plans for IBM storage software solutions across both the Tivoli and Systems Storage brands. Kelly has 28 years with IBM where she's held a variety of roles including Finance, Pricing, Tivoli Channel Development, Director of Customer Insight, managing Market Intelligence, Customer Relations and Marketing Operations. Kelly is married with two daughters, ages 19 and 12.
Matt Anglin: Tivoli Storage Manager Development
Matt has been a member of the Tivoli Storage Manager Server Development Team for 15 years. His areas of expertise include data movement to and within the server, deduplication, shredding, and DB2 interactions. He is the AIX platform expert in TSM, and is knowledgeable about other UNIX, Linux, and Windows platforms. Matt lives in Tucson, Arizona.
Matthew Geiser: Manager, Storage Software Product Management
Matt joined IBM in 2001 and has worked in product management and product development for Storage Software offerings including SAN Volume Controller, Tivoli Productivity Center, Tivoli Storage Manager and IBM Information Archive. Matt's current responsibilities include managing the product management team for the storage infrastructure management offerings. Prior to IBM, Matt worked in a variety of operations, project management and software development roles in the banking and energy industries.
Milan Patel: Senior Product Marketing Manager
Milan is responsible for product marketing of IBM storage software for virtualized server environments, storage clouds and, of course, everyday issues in storage management like backup, recovery, archiving and replication. Milan has been with IBM for over 6 years, working in server and storage systems and storage software marketing groups. Prior to that, Milan spent 13 years in various capacities, from development to product management, across server subsystems and systems management.
Richard Vining: Product Marketing Manager
Rich is the Product Marketing Manager responsible for the IBM Tivoli Storage Manager portfolio of products. Rich joined IBM in April 2008 as part of the acquisition of FilesX, where he served as Director of Marketing. Rich has more than 20 years of experience in the data storage industry, holding senior management roles in marketing, alliances, customer support and product management at a number of leading edge companies, including Signiant, OTG Software, Plasmon and Cygnet. Rich enjoys eating, drinking, travelling and golfing (but doesn't everybody?)
Rodney Fannin: Worldwide Channel Manager, Tivoli Storage Software
Rodney has over 15 years of experience in working with Business Partners. Primary responsibilities include refining the channel strategy for Storage software and developing sales and marketing tactics to increase reseller revenue worldwide. Rodney is also a contributing author for the BP Spotlight on our blog.
Roger Wofford: Product Manager
Roger is currently a Product Manager in Tivoli Storage Software. He has experience in Manufacturing, Development, Marketing and Sales within IBM. He enjoys golf, swimming and the Rocky Mountains. Roger plans to blog about how customers use archiving solutions in their storage environments.
Ron Riffe: IBM Storage Software Business Strategist
Ron is currently the business strategist for IBM Storage Software. During the last six years, Ron has been devising and implementing IBM's storage software strategy with a focus on creating greater client value through integrating IBM storage software and storage hardware offerings. Ron has managed storage systems and storage management software for more than 23 years, holding positions in senior management, product line management, strategy and business development for both IBM System Storage and IBM Tivoli Storage. Ron has written papers on the synergies of storage automation and virtualization and frequently speaks at conferences and customer locations on the subject of storage software. Prior to joining IBM, Ron spent 10 years as a corporate storage manager for international manufacturing firm Texas Instruments after receiving a B.S. in Computer Science from Texas A&M University.
Shawn Jaques: Manager, IBM Tivoli Storage Product Management
Shawn has been in his current role as manager of storage software product management for nearly three years. The team is responsible for product strategy, content, positioning and pricing of IBM storage software solutions. Prior to that, Shawn held product and market management roles in other Tivoli product areas, as well as a stint in Tivoli Strategy. Before joining IBM, Shawn was a Consulting Manager at Cap Gemini consulting and an Audit Manager at KPMG. Shawn has a Master of Business Administration from The University of Texas at Austin and a Bachelor of Science from the University of Montana. He lives in Boulder, Colorado and enjoys fly-fishing, skiing and hiking with his wife and kids.
Terese Knicky: Analyst Relations, Tivoli
Terese is with Tivoli's analyst relations team, covering Storage, System z, Job Scheduling and IBM's General Enterprise solutions. Terese was born and raised in Omaha, NE and transplanted to Texas, where she enjoys watching her two boys play college football.
And finally, let's talk about me. I'm Tiffeni Woodhams and I have been with IBM for nearly seven years. Currently, I am a Tivoli Storage Marketing Manager, responsible for general marketing activities ranging from pipeline measurement and tracking to marketing execution guidance and communications for the geography teams. I am also the Tivoli Storage Social Media lead and co-lead for the IBM Storage social computing strategy, and I work on major launches like Dynamic Infrastructure and Information Infrastructure, providing the storage messaging and linkages. Prior to this role, I held several other marketing positions, including Tivoli Provisioning Go-to-Market Manager, Benelux Software Marketing Manager focusing on Tivoli, WebSphere, and Lotus, Americas Tivoli Marketing Manager, and Tivoli Launch Strategist. In my spare time, I enjoy playing sports (basketball, softball, and golf), coaching JV girls basketball, riding horses, and spending time with family and friends.
Now that you know a little background on each of the team members, we hope that you will let us know some of your interest areas when it comes to IBM Storage and IBM Tivoli Storage Software solutions. Please post comments to this blog and let us know what you want to hear about.
Some topics we will be discussing in the next month include:
Pulse 2010, the Premier Service Management Event
Data Reduction - the steps to get to where you want to be
Archiving - why you need to do it
Unified Recovery Management
New Product announcements and roadmaps.
Thanks and we look forward to hearing your feedback.
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  storage-blog data-management space-managment data-reduction hsm deduplication backup archive 3,131 Visits
Data Reduction Chapter 1: The challenges posed by the tidal wave of data
We're storing and using more data than ever before. The volume of data is growing exponentially, government regulations are expanding, and competitive pressures are increasing, forcing us to retain more of our data for longer periods of time. But our budgets are flat or being cut. And as we become more dependent on digital information, the costs of losing any of it are increasingly painful. The bottom line, of course, is that we need to do a better job of managing our data assets, and as these assets grow and our budgets shrink, we need to do more with less. So we need smarter solutions.
Storage administrators are on the front lines of the Tidal Wave of Data battle. Some of the challenges from data growth that administrators are struggling with include:
- Backups take longer to perform, often not completing within backup window allowances
- Some data is not being adequately protected
IBM can help you build a dynamic storage management infrastructure that will enable you to cope with all of these challenges. We have solutions to help reduce your data storage footprint, and the goals that we set out in these solutions are: to reduce your capital and operational costs; to improve your application availability and service levels; and to help you mitigate the risks associated with losing data and a rapidly changing environment.
With these solutions you should: need less storage; have less data to manage; experience less downtime; and be more competitive. To learn more, please visit the Data Reduction Solutions web page and stay tuned for Chapter 2, where we will outline a holistic and comprehensive approach to data reduction.
Richard Vining 2700019R2A email@example.com Tags:  archive storage-blog data-management deduplication hsm backup data-reduction space-managment 2,356 Visits
Data Reduction Chapter 2: Surviving the tidal wave of data - options for data reduction
In chapter 1, we discussed the struggles that storage administrators are having with the tidal wave of data. In this chapter, we'll begin talking about how data reduction technologies can help you survive and even thrive in the face of these challenges.
IBM takes a holistic approach to data reduction, unlike competitors that offer point solutions to problems that they may in fact be causing. For example, a huge contributor to data growth is the repeated duplication of large amounts of data every time you perform a full backup.
So, one option is to avoid data growth from unnecessary data duplication by only backing up data that has changed since the last backup. This addresses the cause of the problem, not the symptom. For example, if you have a 5 percent per week data change rate, 95 percent of your data didn't change this week. If you perform a full backup this weekend, you're duplicating almost everything you backed up last weekend. Not only does that take a lot of storage capacity, but it also takes a long time, and these problems only get worse as you create more new data. It's no wonder that data deduplication products are so popular; they were designed to eliminate all this duplicate data. And when they claim to reduce your backup storage footprint by 95 percent or more, this is exactly the data that they're talking about.
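The arithmetic here is worth making concrete. A minimal sketch, assuming a hypothetical 10 TB data set and the 5 percent weekly change rate mentioned above:

```python
TOTAL_TB = 10.0            # hypothetical production data set size
WEEKLY_CHANGE_RATE = 0.05  # the 5 percent weekly change rate above

changed = TOTAL_TB * WEEKLY_CHANGE_RATE  # data that actually changed this week
duplicate = TOTAL_TB - changed           # unchanged data a full backup re-copies

print(f"A weekly full moves {TOTAL_TB:.1f} TB, of which "
      f"{duplicate:.1f} TB ({duplicate / TOTAL_TB:.0%}) is duplicate.")
```

The 95 percent figure deduplication vendors quote is exactly this re-copied, unchanged data.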
Another option is to determine what different types of data you have and categorize it so that you can manage it most effectively, by moving less frequently-accessed data to lower-cost tiers of storage, and by deleting data that you no longer need or want. This will shorten your backup cycles and improve application performance, as well as reduce or delay the need to buy more primary storage capacity.
A third option is to put automated processes in place, based on policies that meet business requirements and/or service level agreements, to migrate, archive and delete data. There are several actions that can be taken on your data files based on criteria such as age, how long it has been since last access, which application created it, etc. These automated solutions can include:
Transparent migration of data from production storage systems to a hierarchy of secondary systems; the data remains on-line and available without any modifications to applications.
Archival of data, removing it completely from production systems and storing it in secure storage where retention policies can be set and managed.
Expiration of data, deleting it from all storage once it is no longer needed or to meet corporate governance policies.
The last option is to compress and deduplicate the data you end up putting into your data protection and retention systems. Data deduplication is the most popular technology in this category, and we'll discuss it and the other technologies mentioned above in greater detail in future chapters of this blog.
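As a rough illustration of what deduplication does under the covers, here is a minimal sketch using fixed-size blocks and SHA-256 fingerprints. Real products typically use variable-size, content-defined chunking, and the helper names here are hypothetical:

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks for simplicity

def dedup_store(data: bytes, store: dict) -> list:
    """Split data into blocks, keep one copy of each unique block,
    and return the list of fingerprints that reconstructs the data."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()
        store.setdefault(digest, block)  # stored once, however often it recurs
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its fingerprint recipe."""
    return b"".join(store[d] for d in recipe)
```

Backing up the same data twice adds nothing to the store; only the small recipe grows, which is where the large footprint reductions come from.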
To learn more, please visit the Data Reduction Solutions web page and stay tuned for Chapter 3 in which we'll dig into the first step in effective data reduction.
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  data-management storage-blog backup data-reduction deduplication hsm space-managment archive 3,882 Visits
Data Reduction Chapter 3: Avoiding data duplication
Not only does that take a lot of storage capacity, but it also takes a long time, and these problems only get worse as you create more new data. (It's no wonder that data deduplication products are so popular; they were designed to eliminate all of this duplicate data. And when they claim to reduce your backup storage footprint by 90 percent or more, this is exactly the data that they're talking about.)
But what if you never had to perform a full backup again after the initial one? If you always backed up only the new and changed data, you wouldn't be creating all that duplicate data that needs an expensive deduplication solution to undo. Shorter backup windows, less required storage, and reduced storage acquisition costs would all be benefits of eliminating that weekly full backup. So would faster restore times, since deduplicated data wouldn't need to be re-hydrated in order to be useful.
IBM has smarter solutions that can help prevent the need to perform full backups. The products in the IBM Tivoli® Storage Manager portfolio of recovery management solutions all provide incremental-forever backups.
These are the common backup methodologies and how they compare on backup and restore processing:
Full + incremental
Backup: This requires a full backup and then incremental backups over time, usually a full backup each weekend with incremental backups for the following six days. Only data that has changed since the day before is transferred to tape. Then at the end of the week another full backup must be run.
Restore: The full backup must be restored, then each day's incremental data applied to it. This means that if you have a full backup and three incremental backups of the same file, it will be restored 4 times. It is a waste of time and money, and introduces risk.
Full + differential
Backup: This requires a full backup and then differential backups over time, usually a full backup each weekend with differential backups for the following six days. This means that all data that has changed since the last full backup will be backed up. If you assume a 10 percent daily change rate, then you will back up 100 percent (full) on the first day, 10 percent on the second, 20 percent on the third, 30 percent on the fourth, 40 percent on the fifth, 50 percent on the sixth, and 60 percent on the seventh. That means that you are backing up 310 percent of your data every week! You'll need over 12 times your production capacity for just a month of backups.
Restore: You would restore the full backup and then the last differential up to the date you were restoring to. This is faster and more reliable than the Full + Incremental model, but at the cost of much more storage capacity.
Incremental forever
Backup: This requires a full backup the first time you back up, and then only incremental backups. There are no extra transfers of data, which saves network bandwidth and transfer time, makes backup and restore faster, and can save thousands of dollars in disk and tape costs.
Restore: You select the point-in-time that you want to restore from, and then restore the necessary files just once. This is much faster than with the other two methods.
The analysis shown in the figure above starts with 2TB of data and adds or changes 200GB per day. The assumption is that a full backup has already been performed to set the base.
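Since the figure itself is not reproduced here, the same comparison can be roughly simulated in a few lines of Python, assuming the 2TB base, 200GB of daily change, and a weekly full cycle for the first two methods:

```python
BASE_GB = 2000          # 2 TB of production data
DAILY_CHANGE_GB = 200   # data added or changed per day
DAYS = 28               # four weekly backup cycles

def full_plus_incremental() -> int:
    # Weekly full, then only that day's changes on the other six days.
    return sum(BASE_GB if day % 7 == 0 else DAILY_CHANGE_GB
               for day in range(DAYS))

def full_plus_differential() -> int:
    # Weekly full, then everything changed since that full, each day.
    return sum(BASE_GB if day % 7 == 0 else DAILY_CHANGE_GB * (day % 7)
               for day in range(DAYS))

def incremental_forever() -> int:
    # One initial full, then only daily changes from then on.
    return BASE_GB + DAILY_CHANGE_GB * (DAYS - 1)

for name, fn in [("full + incremental", full_plus_incremental),
                 ("full + differential", full_plus_differential),
                 ("incremental forever", incremental_forever)]:
    print(f"{name}: {fn() / 1000:.1f} TB moved in {DAYS} days")
```

Under these assumptions, incremental-forever moves 7.4 TB in four weeks, versus 12.8 TB for full + incremental and 24.8 TB for full + differential.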
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 4, where we'll cover the discovery and categorization of data to help move it intelligently throughout its lifecycle.
Delbert Hoobler 1000008PR6 email@example.com Tags:  tsm tivuser data-management software storage-blog storage fcm backup 1,973 Visits
Come join me for "Ask the Experts online Jam"!
What is the "Ask the Experts online Jam"?
The "Ask the Experts Online Jam" is a valuable opportunity for YOU to connect with 75+ real-world IBM experts on 30+ Tivoli products. These experts, many from IBM development, are recruited to answer your questions for a concentrated period of 12 hours (8 a.m. to 8 p.m. Eastern USA).
When is the "Ask the Experts online Jam"?
November 12th 2009 - 8AM - 8PM Eastern USA. To find the time in your city check out the World Clock meeting planner website.
Here's how it works in brief:
Step 1: You have a question - usually fairly technical;
Step 2: You find the expert best suited to answer the question by browsing for an expert by pre-defined category and specific product;
Step 3: You fill in a field on the "Ask the Experts online Jam" web application to submit the question.
Step 4: You receive an email answer to your question(s), and the "Ask the Experts online Jam" web application is updated for other members to see.
Ask questions of 75+ IBM experts on the following 30+ topics:
Datacenter Management tools: IBM Tivoli Monitoring, IBM Tivoli Composite Application Manager for Transactions and WebSphere/J2EE, Tivoli Application Dependency Discovery Manager, Tivoli Provisioning Manager, Tivoli Service Request Manager,
Network, Service Assurance and Events: Tivoli Netcool Impact, Tivoli Netcool Performance Flow Analyzer, Tivoli Netcool Performance Manager, Tivoli Netcool/OMNIbus, Tivoli Network Manager (Precision and NetView/d),
Asset Management: Asset Management for IT and Enterprise, Enterprise Asset Management Trends and IBM Maximo Industry Solutions,
Security: Tivoli Access Manager, Tivoli Identity Manager, Tivoli Federated Identity Manager, Tivoli Enterprise Access Manager Single Sign On, Tivoli Compliance Insight Manager, Tivoli Directory Server, Tivoli Key Lifecycle Manager, Tivoli Security Information and Event Manager, Tivoli Security Policy Manager,
Storage: Tivoli Storage FlashCopy Manager on AIX and Windows, Tivoli Storage Manager, Tivoli Storage Productivity Center, Tivoli Storage Manager (TSM) FastBack,
z/OS: Netview for z/OS, OMEGAMON, Tivoli Security for Systems z: Tivoli zSecure Suite
Click here for more information.
I personally will be available from 8am to 2pm covering IBM Tivoli Storage FlashCopy Manager on Windows but there will also be many other storage experts available for the entire 12 hours. Please join us!
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  backup deduplication storage-blog space-managment archive data-management hsm data-reduction 2,363 Visits
Data Reduction Chapter 4: Categorize your data for migration & deletion
In the last chapter, we discussed eliminating one of the biggest causes of data growth: the duplication of large amounts of data every time you perform a full backup. In this chapter, we'll explore the benefits of determining what different types of data you have and categorizing it so that you can manage it most effectively. This will help you set up policies to migrate less frequently-accessed data to lower-cost tiers of storage, and to delete the data that you no longer need or want. By cleaning out your production storage, you will shorten your backup cycles and improve application performance.
The next option for reducing the data storage footprint is to assess the different types of data and where they are in the data life cycle. If your organization is like most, you have all your unstructured data in flat file systems, which are probably full of data that you rarely, if ever, need to access. This may include data you are no longer required by law or policy to keep, but that you haven't deleted, such as old e-mails and memos, that could prove costly if discovered in legal proceedings.
The goal is to identify what data can be moved to less expensive tiers of storage, and what data can be deleted entirely from the environment. This will reduce the need to buy more primary storage capacity and make it easier to manage and protect what you have. Backup and restore performance will improve, and it will be easier to prove that you are meeting data retention and expiration policies.
IBM offers IBM Tivoli Storage Productivity Center for Data for this purpose. This solution reports on where your data is, sorted by access or saved dates, who owns it, the application that created it, and numerous other filters. From the intelligence you gain from these reports, you can set meaningful policies in your data management software to automatically take the appropriate action on data that shouldn't be clogging up your primary systems. Tivoli Storage Productivity Center for Data can also help identify and eliminate duplicate data, orphan data, temporary data and non-business data.
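Tivoli Storage Productivity Center for Data provides this kind of reporting out of the box; purely as an illustration of the idea (the `age_report` helper and its bucket thresholds are hypothetical), a minimal last-access age report might look like:

```python
import os
import time
from collections import Counter
from pathlib import Path

def age_report(root: str, buckets=(30, 90, 365)) -> Counter:
    """Count files under root by days since last access, bucketed
    for a simple 'where is my stale data' report."""
    now = time.time()
    report = Counter()
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        age_days = (now - path.stat().st_atime) / 86400
        for limit in buckets:
            if age_days < limit:
                report[f"< {limit} days"] += 1
                break
        else:
            report[f">= {buckets[-1]} days"] += 1
    return report
```

A production-grade tool would also aggregate sizes, owners, and creating applications, which is exactly the intelligence described above for setting migration and deletion policies.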
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 5, where well talk about automating the migration, archival and expiration of your data.
Shawn Jaques 1200007FSY email@example.com Tags:  green-it storage-blog storage-management storage energy-effeciency green storage-software energy 2,252 Visits
Living in Boulder, Colorado, I am constantly hearing about "green" initiatives such as recycling, composting, alternative transportation, etc. Over the past several years, my family has been doing a much better job of lessening our impact on the Earth through things such as recycling, buying environmentally friendly products and even signing up for energy saving smart grid technology.
I appreciate it when corporations also do their part to reduce their environmental impact by leveraging greener technologies. But let's face it: most corporations act based on the impact to the bottom line (real or perceived) rather than the impact to the environment. Companies like IBM can make the decision easier for clients by building products that improve performance while reducing energy use or other environmental impacts.
I'm proud when IBM delivers "green" technology and thus wanted to point your attention to this video about energy efficient storage. Craig Smelser, VP of Security and Storage Development at IBM Tivoli, introduces some of the storage challenges that can be addressed with energy efficient IBM storage software solutions.
For more information, click here
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  storage-blog data-management archive data-reduction backup deduplication space-managment hsm 2,187 Visits
Data Reduction Chapter 5 - Automated Data Migration
In previous chapters, we’ve talked about the need to reduce your data storage footprint in order to help survive the tidal wave of data, and the first steps in doing so include eliminating unnecessary duplication of data, and then categorizing your data so you can make smarter decisions on where to store it, and for how long.
In this chapter, we take the next step by automating these data management policies through three distinct processes: migration, archival, and expiration. The net result of these processes is to remove unneeded data from your production storage systems, which will reduce or delay your need to acquire more expensive hardware and reduce administrative costs, all without impacting key operational processes.
In the old days of computing and storage management, the concept of transparently moving data from one tier of storage to another was called hierarchical storage management, or HSM. Given IBM’s heritage in mainframes, we still use that term today. More recently, this concept morphed into Information Lifecycle Management (ILM), but it’s the same basic principle – move older, less-frequently accessed data off your most expensive storage devices onto slower, less costly storage media.
HSM and ILM solutions work transparently in the background, automatically selecting and moving files from primary to secondary tiers of storage based on the policy criteria that you set, such as file size or length of time since a file has been opened. They leave a pointer, or stub file, where the data was originally stored so that users and applications don’t need to worry about where the data was moved; the software transparently reroutes the request for any moved files. These solutions automatically move data to the proper media based upon policies you set, freeing up valuable disk space for active files and providing automated access to the migrated files when needed.
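The stub-file mechanism described above can be sketched in a few lines. This is a toy illustration of the concept, not how Tivoli's HSM products actually implement it; the `HSM-STUB:` marker and the function names are invented for the example:

```python
import os
import shutil

STUB_PREFIX = "HSM-STUB:"  # illustrative marker written into the stub file

def migrate(path, secondary_dir):
    """Move a file to a secondary tier and leave a small stub behind,
    the way an HSM solution leaves a pointer where the data used to live."""
    os.makedirs(secondary_dir, exist_ok=True)
    target = os.path.join(secondary_dir, os.path.basename(path))
    shutil.move(path, target)
    with open(path, "w") as stub:
        stub.write(STUB_PREFIX + target)

def read_file(path):
    """Open a file, transparently following the stub if it was migrated."""
    with open(path, "rb") as f:
        head = f.read(len(STUB_PREFIX))
        if head == STUB_PREFIX.encode():
            target = (head + f.read()).decode()[len(STUB_PREFIX):]
            with open(target, "rb") as real:
                return real.read()  # reroute to the secondary tier
        return head + f.read()      # not migrated; return as-is
```

A real HSM implementation hooks into the file system so the reroute is invisible to every application; in this sketch the caller has to go through `read_file` explicitly.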
Data migration solutions help customers get control of, and efficiently manage, data growth and its associated storage costs by providing automated space management. These solutions should provide the following key features:
• Storage pool “virtualization” helps maximize utilization of the managed storage resources.
• Restore management is optimized based on the location of the data in the hierarchy.
• Migration is transparent to the users and to applications.
• Migrations are scheduled to minimize network traffic during peak hours.
• Automatic migrations occur outside the backup window.
• By setting proper threshold limits, annoying ‘out of disk space’ messages can be eliminated.
The IBM Tivoli Storage Manager (TSM) family includes two solutions for automating the migration of data between multiple tiers of storage. TSM 6 for Space Management is for AIX, HP-UX, Solaris and Linux data, while TSM HSM for Windows is for Windows servers.
Tivoli Storage Manager data migration solutions not only help you clean up your primary storage systems to help them run more efficiently, they can also be used to easily move data to new storage technologies as they are deployed. Migrating files to Tivoli Storage Manager also helps expedite restores, because there is no need to restore migrated files in the event of a disaster.
The benefits of Hierarchical Storage Management or Information Lifecycle Management include:
• Improve response times of file servers by off-loading inactive data
• Slow or even stop the growth of your production storage environment
• Use existing storage assets more efficiently
• Reduce backup times and resource usage by focusing on active files only
• Eliminate manual file system clean-up activities
In the next chapter, we’ll look at HSM’s big brother – archiving.
The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.
Richard Vining 2700019R2A email@example.com Tags:  hsm space-managment storage-blog backup archive data-management deduplication data-reduction 2,451 Visits
Data Reduction Chapter 6 - Archiving
I’m back with the next installment on ideas for helping you to reduce the amount of storage capacity you need for an ever-increasing amount of data, and the amount of time you spend managing it. The last chapter covered transparently automating the migration of data from primary storage to secondary systems. An extension of this thought is archiving.
Archiving is another important data reduction technique for certain types of data. One example of this would be financial reporting data (such as weekly, monthly, quarterly, annual data), that needs to be retained for future trending, requirements or auditing, but does not need to consume valuable disk space where live data should reside. Historical medical records and customer statements also often fit into this category.
Archiving is for long-term record retention. It differs from backup in that it retains files for a specific amount of time (whereas backup keeps a certain number of versions of a file) while removing the data from the primary production storage systems completely.
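The distinction can be made concrete with two small policy functions. This is a hedged sketch, not any product's actual retention logic: backup retention is version-count based, while archive retention is time based:

```python
import datetime as dt

def prune_backups(versions, keep=3):
    """Backup retention: keep only the newest `keep` versions of a file.
    `versions` can be any sortable version identifiers (timestamps, ids)."""
    return sorted(versions, reverse=True)[:keep]

def expired(archived_on, retention_days, today):
    """Archive retention: an object expires a fixed time after archival,
    regardless of how many other copies or versions exist."""
    return today >= archived_on + dt.timedelta(days=retention_days)
```
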
Key features of IBM archiving solutions include:
Using IBM archiving solutions for records retention can help you:
IBM offers a choice of solutions for archiving, depending on customer preferences and the applications involved.
Tivoli Storage Manager 6 includes an archiving capability directly integrated into its client backup software. It is policy based, allowing the administrator to set retention times. If the requirement for how long a file must be retained changes, all the administrator has to do is update the policy, and the solution will retroactively update the already archived files; there is no need to restore and re-archive, as some competitive offerings require. Tivoli Storage Manager also offers the option of integrating data from many different applications into your archive repository, and the archive repository can be a virtualized pool of heterogeneous storage systems.
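The retroactive behavior described above follows naturally from one design choice: store a reference to the governing policy with each archived object, instead of stamping a fixed expiration date on it at archive time. A minimal sketch (the `Archive` class and its methods are invented for illustration, not TSM's actual interfaces):

```python
import datetime as dt

class Archive:
    """Archived objects record which policy governs them, not an expiry
    date, so updating the policy retroactively changes when every object
    under it expires -- no restore-and-re-archive needed."""
    def __init__(self):
        self.policies = {}   # policy name -> retention period in days
        self.objects = []    # (object name, archived_on, policy name)

    def set_policy(self, name, retention_days):
        self.policies[name] = retention_days  # create or update in place

    def archive(self, name, archived_on, policy):
        self.objects.append((name, archived_on, policy))

    def expired(self, today):
        """Compute expiry from the *current* policy definitions."""
        return [name for name, archived_on, policy in self.objects
                if today >= archived_on + dt.timedelta(days=self.policies[policy])]
```
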
IBM Information Archive, which contains a specialized version of Tivoli Storage Manager called IBM System Storage™ Archive Manager, is a standalone archive appliance that ingests data directly from more than 40 applications including messaging, healthcare and medical imaging, design and engineering, document management, and others.
Database archiving with IBM Optim and Tivoli Storage Manager
IBM Optim™ Data Growth Solution is a unique database archiving solution that transparently migrates unneeded records from database tables to secondary storage. Like Tivoli Storage Manager’s space management and archive solutions, Optim provides database and storage administrators with a range of cost and performance benefits.
There are also benefits to using Tivoli Storage Manager in conjunction with Optim, which works seamlessly with Tivoli Storage Manager’s application program interface (API) to move archived database records directly into Tivoli Storage Manager’s storage hierarchy.
Optim can also be used with other file-based backup/restore products; however, this involves a two-step process to first archive the data and then back it up. When used with Tivoli Storage Manager, Optim automatically archives database records and then uses the API to store/archive data in a Tivoli Storage Manager storage pool hierarchy. With any other file-based backup/restore product, Optim uses standard file operations to store/archive data in a disk-based file system, and then the backup product can backup the file to supported backup media.
Using Optim and Tivoli Storage Manager together allows you to:
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 7, where we’ll talk about data deduplication and compression as the next options in an effective, holistic approach to reducing your overall data storage footprint.
Ron Riffe 100000EXC7 firstname.lastname@example.org Tags:  scv ibmstorage storage-blog protectier deduplication virtualization storage 3,090 Visits
You've probably heard your mother say "you never get a second chance to make a first impression". So, since today marks my first entry into the blogosphere, I wanted to hit a home run, providing not only some interesting perspective, but also some hard facts that readers can use to potentially save some time and money.
If you have been paying much attention to developments in storage and computing infrastructure in the last few years, you have noticed a significant trend toward virtualization. Servers aren't servers any more, they are virtual machines. Tapes aren't tapes any more, they are virtual tape libraries like the IBM TS7650 ProtecTIER Deduplication Appliance. And in the area of disk virtualization, the most widely adopted approach is the IBM SAN Volume Controller (SVC).
Up until now, disk virtualization has been an enterprise-wide thought. Storage managers who are tasked with taking care of hundreds of TBs, and often PBs, of disk have for years turned to SVC to help eliminate the pain of migrating data between arrays. For these administrators, disk virtualization with SVC has also helped provide a common set of management interfaces and procedures across storage from different vendors, and has helped to create a common set of services like thin provisioning, snapshotting, and mirroring across different tiers of storage.
Not every storage manager, though, is responsible for PBs, or even hundreds of TBs, of storage. Most administrators are just looking for an affordable and easy-to-manage means of satisfying the next request for more storage on Exchange, or SAP, or... About a month ago, IBM introduced some important changes in its mid-range disk virtualization product, SVC EE, designed with these storage managers in mind.
Perhaps the best way to describe these changes is with a picture... (Click on the picture to enlarge)
One of the challenges with traditional disk arrays is that they are relatively inflexible. Think about it... the arrays that have a lot of function (thin provisioning, excellent snapshotting, mirroring, etc.) are generally large, monolithic things that can take up a lot of real estate and burn a lot of power before you get to the first byte of storage. On the other hand, the arrays that are more modular -- allowing incremental growth -- generally don't offer the best software capabilities. And what's more, all of them generally charge an arm and a leg for the software capabilities they do offer.
The important thing IBM did was to package its virtual controller software in an affordable form factor and price it in such a way that mid-sized administrators can build and grow their storage infrastructure modularly. Do you need more disk capacity for a new application? Add an IBM DS3400 SAS disk enclosure. Do you have plenty of capacity but just want some more performance or connectivity? Add an SVC 8A4 controller pair. Do you have plenty of performance but just want some more capacity for archiving? Add a DS3400 SATA disk enclosure. With this sort of modular approach to scaling, the incremental cost of adding capacity can be greatly reduced.
Regardless of how you choose to grow your virtual disk system, there is a valuable set of services that are all included in the base software license (i.e., at no extra charge). They include:
Although I have used IBM DS3400 disk enclosures in my example, a virtual disk system of unlimited size can be constructed using any number of IBM DS3400, DS4000 or DS5000 family disks. SVC EE can also virtualize up to 250 disks from other IBM or non-IBM disk systems.
Lower incremental cost for adding capacity. Efficient SAS and SATA disks. A valuable set of software functions included in the base price. Common management from the smallest configuration to the largest. Would that help save some time and money?
Richard Vining 2700019R2A email@example.com Tags:  hsm data-management backup storage-blog archive data-reduction space-managment deduplication 3,155 Visits
Data Reduction Chapter 7: Data Deduplication
As discussed in earlier chapters, data deduplication is a hot technology that is used to reduce data storage capacity requirements. If you employ smart choices in backup and data management processes, you might not need data deduplication. But if you keep all of your inactive and unimportant data on your production storage systems, and use backup software that forces you to perform repetitive full backups of all that static data, then data deduplication can provide you with a huge benefit.
The basic idea behind data deduplication is to store just one copy of any data object, and place pointers to the single copy wherever duplicates are eliminated. Some solutions do this at a file level, so that the files have to be exactly the same to be deduplicated. This is often called single-instance storage (SIS). Other solutions deduplicate data at a fixed or variable block length. IBM’s solutions use a blended approach based on the size of the data—file-based for smaller files, and variable block for larger files.
Most deduplication solutions run a checksum algorithm against the selected data to create a hash signature, then check to see if that signature has ever been seen before. If it has, the data is discarded and a pointer to the already stored data is put in its place. A small number of high-end solutions perform a complete byte-level differential comparison of the data to remove all potential for “data collisions,” where two distinct data blocks may share the same hash signature.
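A hash-based deduplicating store can be sketched in a few lines of Python. This toy example uses fixed-size blocks and SHA-256 signatures; it is not how TSM or ProtecTIER implement deduplication (ProtecTIER, as noted above, performs a byte-level differential rather than trusting hashes alone), and the function names are invented:

```python
import hashlib

def dedup_store(data, block_size=4096):
    """Split data into fixed-size blocks, store each unique block exactly
    once under its SHA-256 digest, and keep an ordered list of digests
    (the 'recipe') standing in for the pointers the text describes."""
    store, recipe = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:   # signature never seen before: keep the block
            store[digest] = block
        recipe.append(digest)     # pointer to the single stored copy
    return store, recipe

def rehydrate(store, recipe):
    """Reassemble the original data from the block store and the recipe."""
    return b"".join(store[d] for d in recipe)
```

A high-end implementation would add a byte-for-byte compare on every hash hit to rule out the collisions mentioned above; this sketch, like most hash-only products, simply trusts the digest.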
Data deduplication can and does occur at many points in the data creation and management life cycle. In general, these points of deduplication can be broken into source-side, where the data is created, and target-side, where it is stored and managed. Backup applications, for example, can perform source-side deduplication by not transferring data that has previously been backed up over the LAN or WAN, saving on bandwidth.
On the target side, the most popular use of deduplication is in virtual tape libraries, or VTLs. These disk-based systems emulate tape libraries and drives, but apply deduplication to store equivalent amounts of data on disk very cost-effectively while providing performance advantages over tape. Performing deduplication on tape-based systems is considered to be a bad idea, given the portable nature of tapes and the need to recycle them over time; it would be very difficult to guarantee that you maintain the original data for all of the pointers that are out there.
Today, IBM offers two compelling data deduplication solutions. The Extended Edition of Tivoli Storage Manager 6 includes deduplication capabilities to eliminate duplicate data that has been backed up from multiple production systems. Again, TSM’s progressive-incremental backup methodology does not create massive amounts of duplicate data, so the deduplication is only effective when the same data exists on different systems.
The other solution is the IBM System Storage ProtecTIER® family of deduplication systems for reducing data coming from multiple sources, including Tivoli Storage Manager servers, backups from other backup systems, or archive software solutions.
A lot of customers ask when they should use TSM deduplication and when they should use ProtecTIER. I’ll cover this question in detail in my next blog, but the simple answer is:
Delbert Hoobler 1000008PR6 firstname.lastname@example.org Tags:  storage-software tivoli storage storage-blog storage-management tsm 8 Comments 8,272 Visits
Have you played around with IBM Tivoli Storage FlashCopy Manager on Windows yet? If not, maybe it's time to take a look.
When you think of FlashCopy Manager, think of snapshots. FlashCopy Manager provides fast application-aware backups and restores leveraging advanced snapshot technologies. I have been writing software as a developer for IBM Tivoli Storage Manager for almost 20 years now and this technology is one that is changing the industry. Yes, snapshots have been around for a while, but it isn't until the last few years that applications are really starting to embrace them, and in some cases, even require them for their backup needs. There is just too much data to process, too much overhead to back them up, and too little time. People want their applications to serve email and provide access to database tables, not spend their precious cycles on backups. FlashCopy Manager helps address these issues.
FlashCopy Manager follows on the heels of IBM Tivoli Storage Manager for Copy Services (TSM for CS), which provided snapshot support for Microsoft SQL Server and Microsoft Exchange Server using Microsoft's Volume Shadow Copy Service (VSS). The really cool thing is that you do not need to have a TSM Server in order to use FlashCopy Manager to manage your snapshots. It will work completely stand-alone if you want. But, if you have a TSM Server already, you can use it to extend the power of FlashCopy Manager even more.
What is VSS? VSS is Microsoft's snapshot architecture. It provides the infrastructure for applications, storage vendors, and backup vendors to perform snapshots in a federated and efficient way. Microsoft thinks VSS and snapshots are important enough to require that any new software releases coming out of Redmond be able to be backed up and restored using VSS. If you are running Microsoft Exchange Server or Microsoft SQL Server, you should take a look at snapshots. Microsoft has been supporting snapshots with Exchange and SQL for years, but Microsoft Exchange Server 2010 is kicking it up a notch: it only supports backups through VSS. Yes, you heard it right, Microsoft does not support legacy-style (streaming) backups with Exchange Server 2010. So, if you are planning a move to Exchange Server 2010, it really behooves you to start looking at Microsoft's Volume Shadow Copy Service (VSS), how it works, and the benefits and complexities it brings with it.
Microsoft's Volume Shadow Copy Service (VSS) is complex and involves multiple moving parts. It will pay for you to invest some time to understand more about it. I have put together some links that will help you get started:
I encourage you to take a look at Windows VSS snapshots and FlashCopy Manager to see how they might help you. Enjoy!
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  warming our mgm global summit gore tivoli pulse vegas climate las rational websphere management smarter copenhagen crisis choice ibm al service planet 1,733 Visits
In response to: Cooler Planet Crusader-In-Chief
I agree that the guest speaker selection makes a lot of sense with regards to building a smarter planet. It will be interesting to hear what Al Gore has to say.
Richard Vining 2700019R2A email@example.com Tags:  data-management backup deduplication data-reduction hsm archive storage-blog space-managment tivoli 3,647 Visits
Data Reduction Chapter 8: Deduplication with Tivoli Storage Manager 6, FastBack and ProtecTIER
So far in this series, we’ve detailed the challenges that the tidal wave of data is placing on storage administrators, and how a smarter, more holistic and comprehensive approach to data reduction is needed to survive in a way that lets you do more with less.
We covered eliminating the largest source of duplicate data (full backups) and automating the migration, archiving and deletion of older data. Then, in chapter 7, we covered the basics of data deduplication. Now we’ll detail the differences between IBM’s deduplication offerings, and when to best use each.
Let’s talk first about the deduplication capabilities of Tivoli Storage Manager (TSM). This feature is included at no additional charge for TSM 6 Extended Edition customers. This solution can help to reduce recovery times by enabling you to store more backup data and recovery points on disk rather than tape. It works with the data from all sources – via normal backups, data imported via the TSM API, as well as archive and HSM data. TSM deduplicates your disk-based data pools as a post-process, so there is no impact on backup performance. After running, it automatically reclaims the storage that has been freed up.
TSM already eliminates the most common cause of duplicate data – full backups – so the reduction ratios you can expect from TSM’s deduplication solution are fairly modest – the average is about 40%. But when combined with its progressive-incremental backup approach and built-in data compression, TSM’s effective data reduction rate is extremely competitive with any other solution on the market, as detailed in a commissioned report written by Enterprise Strategy Group (ESG), available here (fair warning – registration required – sorry).
Announced today, Tivoli Storage Manager FastBack v6.1 also includes target-side data deduplication to help reduce the capacity required in the FastBack backup repository, adding to its value as the leading near-instant recovery solution on the market for business critical Windows servers and remote/branch offices. Also announced today was Linux support and tighter integration with the Tivoli Storage Manager Integrated Solutions Console (ISC), delivering on IBM’s vision of true enterprise-wide Unified Recovery Management.
IBM System Storage ProtecTIER is a technology leader in performance, scalability, data integrity and reliability. In true apples-to-apples comparisons, this solution is the fastest on the market in real customer environments. A single ProtecTIER system can easily scale in both performance (1,000 MB/sec) AND capacity (1 PB of deduplicated data). ProtecTIER is one of the few solutions that doesn’t rely on a hash algorithm; it performs a byte-level differential to ensure the data really is a duplicate, for enterprise-class data integrity. And ProtecTIER features all IBM best-of-breed components versus the inexpensive OEM’d parts found in competitive products.
ProtecTIER has been proven in very large production environments and is supported worldwide by IBM’s services operations. The TS7650 ProtecTIER Deduplication Family ranges from small (7TB) to medium (18TB) to large-scale (36TB) appliances. And the TS7650G gateway offerings allow you to add the storage of your choice, up to 1PB. Active-Active cluster configurations also provide high availability capabilities.
Video on ProtecTIER: http://www.youtube.com/watch?v=6Uk41HpCTqo&feature=related
Review - Choosing TSM or ProtecTIER for Data Deduplication
While TSM works very well in ProtecTIER environments, you wouldn’t use both TSM deduplication and ProtecTIER deduplication simultaneously. That would require twice as much work for no additional benefit. So when should you choose one over the other? Both solutions offer the benefits of target side deduplication: greatly reduced storage capacity requirements (especially when using TSM’s progressive incremental backup). You’ll have lower operational costs, energy usage and Total Cost of Ownership. You also get faster recoveries with more data on disk.
Use TSM 6 built-in data deduplication when you desire that deduplication operations be completely integrated within TSM. You want the benefits of deduplication without the costs of separate hardware or software – it ships for free with TSM 6 Extended Edition. Or you desire end to end data lifecycle management with minimized data store requirements.
Use ProtecTIER when:
• You need the highest performance up to 1000 MB/sec or more
• You have a large amount of data and need scalable capacity and performance
• You need inline deduplication to avoid the operational impact of post processing
• You are deduplicating across multiple TSM (or other backup) servers
• You don’t have TSM and are performing weekly full backups.
To learn more, please visit the Data Reduction Solutions web page and stay tuned for chapter 9, where we’ll summarize IBM’s holistic approach to data reduction and show you how we can help you survive the tidal wave of data.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  tivoli continuous-data-protectio... ibmstorage tivoli-storage-manager-fa... ibm tsm-fastback data-protection 1,868 Visits
New Product Announced Dec. 15, 2009
IBM Tivoli Storage Manager FastBack for Workstations is an automated, continuous data protection and recovery software solution for desktop and laptop computers, with central management for thousands of systems, and integration with other Tivoli Storage Management offerings.
Here is the URL for this bookmark: http://www-01.ibm.com/software/tivoli/products/storage-mgr-fastback-workstation/
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  information infrastructure tivoli ibm management 2010 dynamic websphere pulse service-management rational 1,929 Visits
In response to: The BIG Questions at Pulse
Those are great questions.
Additionally, you should consider asking yourself these questions that relate to, "What's the Value of this Data to the organization?"
1. Do you have a plan for recovery of that data if lost or corrupted?
2. How fast is that data growing and how are you dealing with the growth?
3. How are you providing increasing service levels with lower cost?
By attending the Storage and Information Infrastructure track at Pulse 2010, you'll find the answers to the questions I've added along with answers to any additional questions you may have concerning your storage, data, and information management.
Take a look at the video below and see how Tivoliman Tames the Data Juggernaut Beast.
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  deduplication hsm backup data-reduction tivoli storage-blog space-managment data-management archive 1,575 Visits
Data Reduction Chapter 9: Surviving the tidal wave of data with IBM data reduction solutions
I hope everyone had a safe and enjoyable holiday, and I’m looking forward to an exciting and prosperous new year. I’d like to take this opportunity to summarize the topics I’ve been covering in this series of data reduction blogs, and give new readers links to the specific topics that you might be interested in.
Please ask yourself these questions:
Through this series, we’ve shown that IBM is the only vendor with a comprehensive set of data reduction solutions that can be applied at multiple points throughout the data creation and management lifecycle. IBM’s broad portfolio of data reduction solutions gives us the freedom to solve your data storage and management issues with the most effective technology for your particular situation. And IBM is continuing to invest in research and development to further develop and deliver the advanced features our customers are requesting.
To learn more, please download my new Data Reduction whitepaper, view the on-demand webcast with Nick Allen, or visit the Data Reduction solution site.
John Foley 12000084U0 FOLEYJOH@US.IBM.COM Tags:  storage-management tivoli ibm storage-blog patel milan cloud ibmstorage video storage-cloud 2,532 Visits
Cloud storage technologies made impressive strides in 2009, and the trend looks to build on that momentum in 2010. IBM is expecting steady growth in cloud storage deployments, especially in the areas of test environments, Web serving, and other non-mission-critical scale-out storage needs.
Standards in this area are just beginning to be discussed and will also be evolving in 2010. Standard file protocols such as CIFS and NFS are obvious starting points for cloud storage access, but other approaches utilizing object storage techniques have also been proposed.
To prepare for cloud storage within the data center, IT managers will need to identify a small number of focused areas to use as starting points. In the short term, cloud storage is a technology that will be deployed to address specific and unique requirements across the enterprise. It is therefore recommended to carefully choose pilot areas where managers can gain insight into extending usage into other areas, and build skills for when the technology becomes more widely deployable.
Watch this video to gain important insight into what it takes to deploy and manage highly available cloud storage environments.
Click here for additional information about IBM Cloud Storage Solutions
Ron Riffe 100000EXC7 email@example.com Tags:  tivoli storage-blog storage-management tsm recovery-management tpc fastback ibmstorage storage-software virtualization storage svc 1,921 Visits
Yesterday, in discussing IBM's fourth quarter 2009 financial results, IBM CFO Mark Loughridge had this to say about Storage Software:
"Tivoli storage continued its robust growth as customers manage their rapidly growing storage data. Data Protection as well as Storage Management grew double digits, with broad-based geography and sector growth."
If you are already benefiting from IBM Storage Software - Thank you!!
If you haven't yet started taking advantage of IBM Storage Software, come visit us.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  tivoli information-infrastructur... storage virtualization pulse ibmpulse pulse2010 data-availability data-management tsm backup-recovery dynamic-infrastructure data-recovery ibmstorage ibm data-reduction storage-blog data-protection tivoli-storage 2,834 Visits
With only 4 weeks until Pulse 2010 - The Premier Service Management Event - Optimizing the World's Infrastructure, I thought it might be helpful to provide some details around the sessions and activities that will be available to all of you storage and information infrastructure enthusiasts out there....
Here are a few sessions that you can attend each day. Sign up for these sessions and others today (requires only an IBM.com password - you do NOT have to be a Pulse registered attendee to create a Pulse schedule online)!
Go to the on-line agenda tool to see additional Storage and Information Infrastructure sessions that may be of interest to you. There are also sessions in the Expo Theater Stream.
Register and attend Pulse to take full advantage of all that will be offered:
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  appliance tivoli management software amazon system midmarket service monitor tfam saas ami pulse foundations midmkt ibm tivfdt fsm 1,624 Visits
In response to: Service Management for Midsize Businesses at PULSE
It's great to see that IBM Tivoli Storage Manager FastBack will be showcased as the backup and recovery solution for midsized businesses and included in the Service Management for Midsized Business Track at Pulse 2010. Also be sure to check out the Expo to see the IBM Comprehensive Data Protection Solution Express demonstration.
Oren Wolf 270002KMMG firstname.lastname@example.org Tags:  backup tivoli hyper-v tsm storage-software storage backup-recovery storage-management vmware storage-blog 1,943 Visits
I don't know about you, but I have been virtualizing like crazy over the last few years: humongous servers have been turning into medium-sized virtual machines, and test and lab environments have turned into small files running on my laptop from a flash drive.
My IT department has been virtualizing even more: consolidating servers, sharing storage resources among multiple machines, and converting NICs (Network Interface Cards) into virtual switches (I still haven't figured out how they did that).
The move to a virtualized environment is very useful for reducing energy consumption, decreasing the physical server and storage footprint, and driving up processor and storage utilization, but it also has some side effects when it comes to data protection.
The problem begins at the same place that drove us to virtualization in the first place: resource sharing. You may now have 10 virtualized servers running on the same physical host. If your backup process consumed only 5% of CPU and I/O on your physical server, imagine what would happen if all 10 virtual machines kick off the backup process at the same time...
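One common way to blunt that all-at-once spike is simply to stagger backup start times across the window. The sketch below is illustrative only (the function name, the 4-hour window, and the fixed seed are my assumptions, not anything from a particular product):

```python
import random

def staggered_start_times(vm_names, window_minutes=240, seed=42):
    """Assign each VM a distinct backup start offset (in minutes)
    within the backup window, so backups sharing one physical host
    never all kick off at the same time."""
    rng = random.Random(seed)
    offsets = sorted(rng.sample(range(window_minutes), len(vm_names)))
    return dict(zip(vm_names, offsets))

# Ten VMs on one host each get a different start minute in a 4-hour window
schedule = staggered_start_times(["vm%d" % i for i in range(10)])
```

A real scheduler would also weigh backup size and host load, but even this naive spreading keeps ten 5% backup jobs from stacking into one 50% spike.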
There are multiple valid approaches to providing data protection for those virtual machines, and I'll try to address each of them in upcoming blogs…
Other enhancements that might not necessarily be backup related but have to be seriously considered when virtualizing include
Richard Vining 2700019R2A email@example.com Tags:  flashcopy storage-blog instant-restore recovery-management snapshot tivoli data-protection 2,588 Visits
IBM will be providing a series of live web-based demonstrations dedicated to showing you the value of IBM Tivoli Storage FlashCopy Manager and how the product works. These will be demos of live code.
Organizations seeking to improve protection for business-critical application data can leverage Tivoli Storage FlashCopy Manager to simplify management through integration with IBM storage hardware advanced snapshot technology. As the first event in a series of customer web conferences, we will focus on demonstrating the features of Tivoli Storage FlashCopy Manager for the Microsoft Exchange environment.
Audience: IBM Customers and their associated IBM and Business Partner Sales representatives
Key Features of Tivoli Storage FlashCopy Manager include:
Hosts: John F. Miller, IBM North American Sales Executive
Neil Rasmussen, IBM Software Designer for Tivoli Software
Tivoli Storage FlashCopy Manager Demo Schedule
All calls will begin at 12 noon ET and last one hour.
Web Conference details:
Audio Conference Call details:
To learn more about IBM Tivoli Storage FlashCopy Manager, please visit:
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  fastback demonstration instatnt-restore large-eterprise midmarket disaster-recovery tivoli storage-blog storage-software data-protection ibmstorage backup branch-office microsoft-exchange continuous-data-protectio... recover 5,046 Visits
Live Demo! IBM Tivoli Storage Manager FastBack and IBM Tivoli Storage Manager FastBack for Exchange Scheduled Dates in 2010
Mark Your Calendars!
IBM will be presenting a series of live demonstrations dedicated to showing the value of the IBM Tivoli Storage Manager (TSM) FastBack and TSM FastBack for Exchange data protection products.
These additions to the TSM product family offer the ability to meet aggressive Recovery Point and Recovery Time Objectives in an organization's data protection service.
The TSM FastBack family provides many advanced features including:
Instant Restore allows users to access their data or application immediately, while the restore is taking place.
Continuous Data Protection sends backup data continuously which allows a recovery to be done to any point in time.
Incremental Forever Backups prevent wasting time and money performing and storing unnecessary full backups. Each backup appears to be a full backup, but only the blocks that have been modified are copied.
FastBack Mount allows access to backed-up data without it being recovered. This enables data to be validated after backups, the correct data to be identified before it is recovered, or data to be opened and its contents to be recovered at a more granular level, thus reducing the size and time of the recovery.
Exchange Brick-level Recovery allows individual Exchange mail objects to be recovered from a previous backup without requiring an entire Exchange database to be recovered. TSM FastBack for Exchange does not require additional backup processing to provide IMR.
Branch Office Disaster Recovery allows replication of branch office backup data to a central site. This data can be compressed and encrypted during the transfer. The replicated data at the central site can be used as the source for creating a tape copy of the data or for recovering branch office data and hosts. TSM FastBack allows the backups and replication of multiple branch offices to be monitored with a single tool.
TSM FastBack Bare Machine Recovery allows hosts to be quickly recovered, even to dissimilar hardware.
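The incremental-forever idea above can be sketched in a few lines: each backup stores only the blocks that changed, yet any point in time can be presented as a full image by layering snapshots. This is my own toy illustration of the concept, not FastBack's actual implementation; all names here are made up.

```python
def backup(repo, snapshot_id, volume, prev_state):
    """Store only the blocks that changed since the previous backup.
    Each snapshot can still be presented as a full backup on restore."""
    changed = {blk: data for blk, data in volume.items()
               if prev_state.get(blk) != data}
    repo[snapshot_id] = changed
    return dict(volume)  # becomes the baseline for the next backup

def restore(repo, upto_snapshot):
    """Rebuild the full volume image for a chosen point in time by
    layering the changed blocks of each snapshot in order."""
    volume = {}
    for sid in sorted(repo):
        if sid > upto_snapshot:
            break
        volume.update(repo[sid])
    return volume
```

For example, after two backups of a two-block volume where only block 1 changed, the second snapshot holds just that one block, yet `restore(repo, 2)` still returns the complete volume.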
These demonstrations are open to Customers, Business Partners and IBM employees.
TSM FastBack Demo Schedule for 2010:
Demos will be available in English and Spanish. All English calls will be at 10:30 AM and 3:00 PM Central Time on Thursdays.
All Spanish calls will be available at 1:00 PM Central Time on Wednesdays.
There are Web Conference and Audio Conference components to this demonstration.
Conference ID is FASTBAK
Prior to the web conference, we suggest you do the following:
1) Go to www.sametimeunyte.com
2) Click on Support
3) Click on Lotus Sametime Unyte Meeting System Check
4) Select attendee type and click Next
5) Proceed with the system check and install any plug-ins required.
English Live Demo Audio Conference:
Title: TSM Fastback LIVE Demo
Toll Free: 800-857-4143
Spanish Live Demo Audio Conference:
USA- Toll Free: 888-359-3613 Toll: 719-325-2348 T/L: 650-2012
Argentina-0800 666 2982; Australia-1 800 138 721; Austria-0800 291 390
Belgium-0 800 77 128; Brazil-0800 891 4391; Bulgaria-00 800 1100 178
Chile-123 0020 9673; China, Northern-10 800 714 1159; China, Southern-10 800 140 1141
Colombia-01 800 518 0760; Costa Rica-0800 015 0597; Czech Republic-800 900 705
Denmark-80 884 789; Finland-0 800 1 119654; France-0 800 902 956
Panama-08 600 205 3173; Peru-0 800 53 354; Philippines-1 800 1110 0845
Portugal-800 819 688; Russia-810 800 2679 1012; Singapore-800 101 1954
Slovenia-0 800 80158; Spain-900 967 691; Sweden-02 079 3083
Switzerland-0 800 563 064; Thailand-001 800 156 205 5311; Trinidad and Tobago-1 800 205 5311
United Kingdom-0800 028 9769; Uruguay-0004 019 0176; Venezuela-0 800 100 5265
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  midmarket saas foundations appliance service management ami software amazon tfam fsm system monitor midmkt tivfdt ibm tivoli 1,919 Visits
In response to: The Pulse Roadmap to Mid-Market Solutions and Opportunities
IBM Tivoli Storage Manager FastBack is a great continuous data protection, backup and recovery solution for both midmarket and large enterprise organizations, for branch offices or data centers. For more storage sessions while at Pulse 2010, check out this blog post: https://www-950.ibm.com/blogs/tivolistorage/entry/the_pulse_roadmap_to_storage_expertise?lang=en_us
Delbert Hoobler 1000008PR6 firstname.lastname@example.org Tags:  storage-management flashcopy fcm storage-software tsm storage 1,793 Visits
The last time I blogged, I told you about IBM Tivoli Storage FlashCopy Manager on Windows and just how cool it is. Well, I am working on some more neat stuff, and I wanted to tell you about a beta program for the upcoming release of IBM Tivoli Storage FlashCopy Manager. If you want to test some of the new functions and features of the upcoming release, please contact Mary Anne Filosa (email@example.com) or your IBM Sales representative for details.
The enrollment period is ending soon, so don't wait to be a part of the action!
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage-blog data-reduction ibmstorage data-management pulse2010 dynamic-infrastructure pulse data-recovery data-availability ibm backup-recovery data-protection storage-software tivoli ibmpulse service-management virtualization 2 Comments 2,638 Visits
The countdown is on... with only 2 weeks left until Pulse 2010, I wanted to give you an update on additional perks you'll have access to if you register and attend.
Meet the Experts!
Talk one-on-one with Product Experts
Visit the Expo!
Share Your Story
This year at Pulse 2010 we are scheduling videotaped interviews with clients who are willing to share their thoughts on what they are doing to achieve visibility, control, and automation in their infrastructure. We will be filming client videos at Pulse from Sunday, February 21, through Wednesday, February 24. The content will be used to produce short videos that tell the story of the needs clients are addressing in their organizations. Our customers have been sharing their stories throughout 2009, as you can see below. Interested in participating? Notify me at firstname.lastname@example.org
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  space-managment archive midmkt storage-blog data-reduction backup tivoli midmarket data-management deduplication hsm mid-market 1,277 Visits
In response to: See Tivoli Storage Management - and me - in action at Pulse 2010
Rich, thanks for the recap of some important sessions that will be presented at Pulse. Additional Storage and Information Infrastructure tracks can be found on the Tivoli Storage Blog.
John Foley 12000084U0 FOLEYJOH@US.IBM.COM Tags:  smart-archive storage-blog ibm information-archive ibmstorage ibm-storage 1,845 Visits
The end of last year was pretty hectic for a lot of us, and you might not have attended IBM's "Information on Demand Gala," where we introduced our Smart Archive Strategy. Several of my customers have been asking for a refresher on the topic, and we've just posted a short video describing this comprehensive approach, which combines IBM software, systems and service capabilities designed to help you extract value and gain new intelligence from information by collecting, organizing, analyzing and leveraging it. For more information, watch this video, visit the IBM Smart Archive Strategy website, and meet me at Pulse 2010 by attending the Storage Track sessions to discuss your specific archiving needs.
Richard Vining 2700019R2A email@example.com Tags:  data-management data-reduction hsm deduplication space-managment recovery archive tivoli backup storage-blog 3,162 Visits
On 19 March 2010, IBM will release Tivoli Storage Manager V6.2, the next in a long line of enhancements to the leader in enterprise-wide data protection, unified recovery management and effective data reduction. Highlighting this release is the addition of source (client-side) data deduplication, tighter integration with TSM FastBack, enhanced support for virtual server environments, automatic deployment of Windows client upgrades, and improved automation and performance of back-end data management processing.
Full details of this announcement are in the TSM 6.2 Announcement Letter
Oren Wolf 270002KMMG firstname.lastname@example.org Tags:  virtualization storage storage-software storage-blog data-protection backup-recovery data-recovery backup 2,560 Visits
In my previous blog I discussed some of the viable approaches to data protection for virtual machines. Before I delve into the pros and cons of each approach, I'd like to discuss the fundamental differences between file-level and block-level backup (and solicit your input :-) ).
Encapsulation is one of the basic rules of software design; simply put, it's the computer geek's equivalent of the famous "Don't ask, don't tell" policy. The idea is pretty simple. Let's assume our file system is component A and our disk system is component B. Components A and B publish a public interface that others can use, but they hide their internal mechanisms from other components. This enables us to do some nifty tricks, such as RAID: as far as the file system is concerned, it is working with a "regular disk". It is unaware that our disk system has actually taken the 100GB disk space we defined and partitioned it into multiple stripes that are located across 5 different disks, in order to provide it (the FS) with better performance and hardware fault tolerance. There are other places where this principle is used, but you have to agree that it comes in pretty handy.
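The striping trick above boils down to a tiny address translation the file system never sees. Here's a minimal sketch of the mapping for a plain RAID-0 layout across 5 disks (one block per stripe unit, purely for illustration; real arrays use larger stripe sizes):

```python
def stripe_location(logical_block, num_disks=5):
    """Map the logical block number the file system asks for to a
    (disk, offset) pair in a simple RAID-0 stripe across 5 disks.
    The file system just sees one contiguous 'regular disk'."""
    return logical_block % num_disks, logical_block // num_disks
```

Logical blocks 0 through 4 land on disks 0 through 4 at offset 0; block 5 wraps back to disk 0 at offset 1, and so on. Consecutive reads hit all five spindles in parallel, which is where the performance win comes from.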
But why do I even mention "encapsulation", and how is it relevant to file- vs. block-level backups?
The point I am trying to make is that the disk level is not aware of the "file contents" and the file system is not aware of the "disk layout". This actually dictates the pros and cons of these two very different approaches to data protection.
With file-level backups it's really easy to define which files you want to protect. Then, when the time comes, someone has to access the files and move the data they contain to some sort of data repository. In order to do that, you must deal with issues such as:
- Open files
- Interdependencies between multiple files
- Identifying which (sub)files have changed
- For structured data (databases etc.), do we back up the entire file (or file group) or only the portions that have changed?
Block-level backups are usually pretty straightforward: there's a mechanism that keeps track of the changes in real time (this usually enables CDP, but that's a whole different story), and when the time comes the data is moved to the data repository. But this technology has its own challenges:
- Minimum granularity is usually a volume
- Hard to exclude unused file data (page file?)
- Recovering files from a block level backup
- Communicating with applications (and File System) to ensure backup consistency.
Generally speaking, block-level backups have a lower overhead than file-level backups. So, if you decide to virtualize your environment and keep using agents on the individual virtual machines, you will probably want to use a block-level backup solution. File-level backups are still viable (especially if they skip the "indexing" process by using an FS filter or journaling and allow for sub-file incremental backup), but you will need to be more careful when planning your backup windows in order to prevent VM sprawl.
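The "lower overhead" claim comes from that real-time change tracking: writes mark their blocks dirty as they happen, so backup time is spent reading only the marked blocks rather than walking the whole file system. A bare-bones sketch of the idea (my own illustration; class and method names are invented):

```python
class DirtyBlockTracker:
    """Toy model of the real-time change tracking behind block-level
    backup: every write marks its block dirty, so the next backup
    reads only the marked blocks instead of scanning everything."""
    def __init__(self):
        self._dirty = set()

    def on_write(self, block_number):
        self._dirty.add(block_number)  # hooked into the write path

    def blocks_to_back_up(self):
        # Hand back the changed blocks and reset for the next cycle.
        changed, self._dirty = self._dirty, set()
        return sorted(changed)
```

After a few writes to blocks 3 and 7, `blocks_to_back_up()` returns just `[3, 7]`, and a second call returns an empty list until new writes arrive. Notice the tracker knows nothing about which files those blocks belong to, which is exactly the encapsulation trade-off discussed above.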
Stay tuned; next we'll discuss other approaches, such as proxy backups.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  tivoli pulse2010 ibmstorage archive partners pulse storage-software infromation-archive ibmpulse storage-blog 1 Comment 1,946 Visits
Pulse kicked off today with the Business Partner Summit. I attended the IBM Information Archive session, where the attending partners and I learned about the archiving ecosystem and how IBM Information Archive helps reduce costs, improve productivity and efficiency, and reduce risk. Information Archive is a simplified, cloud-ready smart business system.
Some important questions to help understand whether or not an archiving solution is needed include:
The partners in the session had a lot of great comments and questions.
If you are a partner and you were unable to attend the IBM Information Archive session (or you attended but want to hear more) you can attend the other sessions that are scheduled at Pulse:
A technical look inside IBM's next generation archive appliance Tuesday 3:30-4:30pm @ RM 120
IBM's Smart Archive Strategy Simplifying Information Retention Tuesday 5:00-6:00pm @ RM 120
Birds of a Feather: IBM Smart Archive Strategy Discussion Tuesday 6:00-7:00pm @ RM 120
Next stop for me is the Pulse 2010 Business Partner Summit General Session!
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  ibm tivoli rational las-vegas management service pulse olympics 1,283 Visits
In response to: Pulse: The Olympics of Service Management
It's amazing that so many countries are represented at Pulse 2010. I wonder how many of them are here to learn about storage....
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  ibmpulse pulse pulse2010 storage storage-blog information-infrastrucutr... 1,325 Visits
Pulse 2010 got off to a great start with a very successful Business Partner Summit. Several storage partners attended the Storage Breakout session. We were even able to get some of them to sign up for professional video interviews...
During the Tivoli Storage Software Strategy and How to Sell It! session, Dan Galvan, VP of IBM Systems Storage Marketing, gave an overview of the Smarter Planet initiative, and Ron Riffe provided an in-depth presentation on the storage software portfolio. Partners were informed of three solution plays that they can focus on for storage. Many questions were asked and answered.
We also provided details on how our partners can stay connected during and after Pulse with IBM Storage networks and social media. These networks are also available to our customers and our partners' customers.
Tivoli Storage Blog for getting conference updates and daily highlights from Pulse 2010. This blog is used to discuss many different topics, like data reduction, virtualization, new product announcements and more.
IBM Storage Community for managing your contacts at Pulse, sharing links and bookmarks, and providing feedback on the conference
IBM Storage on Twitter for listening and contributing to real-time buzz with other Pulse attendees and organizing meetups. Use #ibmpulse in your tweets. You can also follow me on Twitter.
IBM Storage LinkedIn Group for connecting and networking with individuals interested in IBM Storage
Storage Management on YouTube for viewing and commenting on live stories with Pulse attendees and viewing other storage videos.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  ibmpulse storage storage-blog pulse2010 pulse information-infrastructur... 1,363 Visits
Today (Monday) was an action packed and exciting day.
The day started off with the General Session, where Al Zollar, the General Manager of Tivoli, opened with a discussion of Smarter Planet and how the world is getting smarter - instrumented, interconnected, intelligent. He gave several examples of how companies are shifting to become smarter: smarter buildings, smarter healthcare, smarter cities, etc. By becoming smarter, Al explained, both risk and complexity can be reduced.
I enjoyed hearing about the Capital Region of Denmark, which has over 1.5 billion bytes archived and has revolutionized its storage management so that all that data is managed by 4 staff members.
The presentation then went into Integrated Service Management for Data Centers, for Design & Delivery, and for Industries which consists of
and the importance of... Visibility, Control, and Automation!
There were also some new storage announcements made in the general session (stop by the expo to see the demo of each product):
The other speakers included Rational General Manager Danny Sabbah, who dove deeper into Integrated Service Management for Design & Delivery, and Laura Sanders, Tivoli Vice President of Strategy & Development, who gave an entertaining demonstration with live code showing a smarter city (accompanied by Dave Lindquist, IBM Fellow, Vice President & CTO, Tivoli Software, and Dr. Wing To, Vice President of Strategy and Product Management, Tivoli Software). After the demo, the last IBM speaker was Mike Rhodin, who went into more depth around Integrated Service Management for Industries.
The guest speaker to wrap up the General Session was former Vice President of the United States, Al Gore.
After the General Session, the rest of my day was a blur. It was filled with attending the Storage & Information Infrastructure track kick-off, meetings with customers and business partners to do impromptu video interviews and podcasts, tweeting, reporting storage highlights for the Pulse Points daily newsletter, checking out the expo and the demos, and scheduling more video interviews. I must have walked at least 6 miles today, making several trips to and from the conference center. I was a little bummed that I wasn't able to attend as many of the customer case sessions in the storage track; I'll have to make up for that tomorrow.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage ibmstorage tsm storage-software partner storage-cloud pulse logicalis ibmpulse pulse2010 storage-blog tivoli 1,855 Visits
Today I did several live video interviews. Let me be honest with you, it is clear that I wasn't meant to be in the journalism profession, uhm, now that is the truth!
I met many IBM clients and business partners throughout this week at Pulse, and today I did an interview with Roger Finney from Logicalis, an IBM Business Partner. We did this interview right outside the expo hall at the MGM Grand hotel, so you can hear the airplanes going overhead from McCarran International Airport.
Logicalis has been an IBM Business Partner for over 14 years and is both Software Value Plus authorized and Tivoli Accredited. In this video, I ask Roger to provide some details about how Logicalis has helped its customers with their storage management needs.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage-blog pulse oakwood virtualization storage-software ibmpulse pulse2010 1 Comment 1,845 Visits
I had the pleasure of interviewing one of our client speakers, Brian Perlstein from Oakwood Healthcare System. Brian will be presenting the Oakwood Healthcare System's Virtualization story on Wednesday Feb. 24th at 9:30 to 10:30 am in the Conference Center room 121. Hope to see you there!
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  vcuhs ibmpule pulse2010 pulse storage-software backup-recovery tivoli ibmstorage storage-blog 2 Comments 2,269 Visits
Yesterday I interviewed Greg Johnson, CTO and Director of Technology and Engineering Services for the Virginia Commonwealth University Health System (VCUHS). Greg presented at Pulse on Tuesday, discussing how VCUHS is transforming IT in a healthcare environment, with a focus on their storage and backup and recovery solutions. If you weren't able to attend Greg's session on Tuesday from 2:00 to 3:00 pm in Conference Center room 120, watch the video below and you'll see a high-level recap of what he presented.
Once again, this was a live interview from outside the expo hall at the MGM, and McCarran International Airport sure is one of the busiest airports in the world... maybe I should have done my interviews inside the conference center. Still, I enjoyed the fresh air, and the airplanes in the background just add to the charm of a live interview. I still think that journalism is a field I will not be pursuing... hopefully my interview skills will improve before Pulse 2011, which will be Feb. 27 - Mar. 3, 2011.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  pulse2010 ibmpulse pulse ibmstorage storage storage-management 1,545 Visits
It's been almost 2 weeks since Pulse 2010 in Las Vegas and I'm still playing catch-up. I finally finished loading all the pictures I took while at Pulse, and last week I finished uploading all my YouTube videos. Check out the IBM Pulse Conference Flickr Group for all the Pulse 2010 photos - the ones I loaded are from tiffwdms. Check out the Pulse 2010 YouTube videos, and for all the storage YouTube videos you can go directly to the Storage Management Playlist.
If you were unable to attend Pulse 2010 in Las Vegas you can attend the virtual event on March 16, 2010. Register here.
I want to share with you a few of the expert videos that I captured while at Pulse.
Kathy Mitton - Tivoli Storage Expert
Jason Perkins - Tivoli Storage Expert
Rajendran Subramaniam 060000D5Y9 email@example.com Tags:  pulse2010 storage-pulse2010 storage-blog ibmstorage pulse ibmpulse storage xiv 1,360 Visits
In response to: Viral Friday - Video - Pulse 2010 - XIV Demo in the Expo
Nice video.
I added it to the Storage YouTube channel playlist at http://www.youtube.com/view_play_list?p=40B9D25D29105511
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  storage-blog tivoli pulse service-management ibmpulse pulse2010 storage 1,394 Visits
While I was at Pulse 2010 in Las Vegas, I had the pleasure of meeting and interviewing Nils Lau Fredriksen, CIO for the Region of Southern Denmark. Nils was one of the five CIOs who participated in the CIO panel during the Day 2 General Session. It was very interesting to hear about his experience implementing integrated service management, along with those of the other CIOs on the panel.
Nils went into more depth during his presentation on Wednesday, Feb. 24th, regarding his experience implementing integrated service management (or what he calls quality management) at the Region of Southern Denmark. I attended the session, and there were many questions from the audience.
I met up with Nils after his presentation to get a quick interview, which you can watch below...
or in Danish:
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  data-reduction retention deduplication data-protection service-management unified-recovery-manageme... risk-management storage-blog backup recovery compliance tivoli archive business-continuity restore disaster-recovery 1 Comment 3,451 Visits
Unified Recovery Management for a Smarter Planet
Welcome to my new blog series which will focus on simplifying the lives of storage and backup administrators. In this first installment, of course, I’ll start laying out the problem as I see it. Hopefully, you’ve seen all the many IBM Smarter Planet commercials on TV over the past year. The basic story is that the planet is smaller and flatter than it used to be, and is more connected economically, socially and technically. I don’t think anyone would argue that information technology has dramatically changed the way people, businesses and governments interact across the planet.
This is because everything is becoming more instrumented, interconnected and intelligent. Cars are talking to sensors embedded in roads, mobile equipment is tracked via GPS, machines of all types are predicting the need for maintenance and calling home to schedule a service call, groceries are talking to store shelves, and intelligent power meters are reducing the waste in transmission systems.
As former U.S. Vice President Al Gore said in his keynote at Pulse2010 last month, “Traffic on the Internet is now dominated by things communicating with things, rather than people communicating with people”.
The result of all this interconnectivity is the creation of enormous amounts of digital information – data. This data is being used for incredible things: finding cures for many diseases, finding oil and gas in new places, dynamically reducing traffic bottlenecks, preventing crime, and improving the delivery of health care – all while reducing costs. But what the commercials fail to mention is that someone has to manage all this data – it has to be stored, protected, available and reliable. The traditional response to data growth has been to throw more capacity at the problem, but this is no longer the ‘green’ thing to do. While the costs of storage capacity continue to decrease, the costs of housing, cooling and managing storage now consume the majority of most IT budgets. We need to find smarter ways to manage more data, ways that require less infrastructure, less power, fewer people, and yes, less money.
The environments that all of this data reside in are becoming incredibly complex, leading to an unmanageable patchwork of tools and processes that storage administrators have to use in order to attempt to meet the service level needs of their organizations. For example, the numbers of different hardware platforms, operating systems and applications are expanding (and of course each new application is more important than the last), while the places where data is being created and stored are multiplying. And way too many different things can go wrong, each demanding a different type of response.
I’ll be diving deeper into this complexity in the next installment, and later in the series will describe what IBM is doing to help simplify your life. Get ready for the Smarter Planet, and for managing all the data that it’s creating!
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Rajendran Subramaniam 060000D5Y9 email@example.com Tags:  solid-state-storage ibmstorage pulse pulse2010 storage ibmpulse storage-pulse2010 demo 1,175 Visits
In response to: Viral Friday - Pulse 2010 - Solid-state Storage Demonstration
Thanks for this nice video.
Rajendran Subramaniam 060000D5Y9 firstname.lastname@example.org Tags:  storage disk ibmpulse pulse nas nseries ibm storage-blog netapp ibmstorage pulse2010 1,396 Visits
In response to: Viral Friday - Pulse 2010 - N series Demonstration
Thanks, Tiffeni, for the nice video.
Richard Vining 2700019R2A email@example.com Tags:  archive data-reduction restore risk-management tivoli service-management deduplication compliance storage-blog unified-recovery-manageme... recovery business-continuity backup data-protection disaster-recovery retention 2,239 Visits
In chapter 1, I described how the planet is becoming ‘smarter’ and that this transformation is creating enormous amounts of new data that needs to be effectively managed. In this chapter, I’ll review some of the things that complicate the effort to ensure all this data is properly retained, protected and available when needed.
Ideally, you would like to have a single tool that does everything, across the entire enterprise, providing the ability to effectively respond following any type of event. While many vendors promise to solve your problems, nobody can provide this capability in a single package – the problem is just way too complex. But (tease), IBM is driving toward a unified recovery management capability that enables you to manage a selection of tools from a single administrative interface. More on this next week; first we need to ensure that we appreciate the complexity.
The first category is infrastructure – where is the data?
Your IT shop probably includes several if not many types of hardware: computer platforms such as x86, Power, RISC, Sparc, mainframes, etc. And there are a wide array of storage platforms, including direct-attached (DAS), network-attached (NAS), tape, and I’m sure many of you still have optical disks somewhere. And from many different vendors!
On these platforms, you’re going to have different operating systems: AIX, HP-UX, Linux, Solaris, VMware, Windows, z/OS. Then they’re going to be physically located in different places – data centers, staff offices, production facilities, remote/branch offices, disaster recovery sites, and warehouses.
Different types of networks, and the available bandwidth on them, further complicate the system. You have local-area (LAN), wide-area (WAN), storage-area (SAN) and metro-area (MAN) networks; additionally you may have cable networks running to some offices (particularly home offices), telecommunications networks that now carry data, and USB connections to some storage devices. And finally, you likely have important data being created and stored on user workstations.
How many tools do you use just to cover this level of complexity? But wait – there’s more! The next question is: who owns the data?
We also have to matrix in the different types of applications you have – general file systems; email, instant messaging and collaboration systems; databases such as DB2, Oracle, SAP, SQL Server and MySQL; and your industry-specific mission-critical applications such as CAD/CAM, medical records management, software development, manufacturing resource planning (MRP), or customer relationship management (CRM).
Now consider that the data created and used by any of these applications may be on any hardware platform and operating system, in multiple locations, using a variety of networks. A lot of the data may be on user workstations as well. Oh my!
But there’s still more – what can go wrong?
As I noted in my last blog, lots of things can go wrong, and you really need a different response for each of them:
OK, now draw a line from every block on the diagram above to every other block and tell me what your backup and recovery plan is for every line – even in this simple diagram, there are 100 different scenarios, but when you consider all the variables, it may be millions. What tools would you use, who will use them, how long will it take to recover, and how much data will be lost? And what does it cost?
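To make the "it may be millions" point concrete, here is a minimal sketch of how quickly the scenario count multiplies across the dimensions described above. The category lists and their sizes are illustrative placeholders, not IBM's actual taxonomy – plug in your own environment's counts.

```python
# Illustrative sketch: recovery scenarios grow as the cross-product of
# every dimension discussed in this chapter. Counts here are made up.
from itertools import product

platforms = ["x86", "Power", "Sparc", "mainframe"]
operating_systems = ["AIX", "Linux", "Solaris", "VMware", "Windows", "z/OS"]
locations = ["data center", "branch office", "DR site", "workstation"]
failures = ["hardware failure", "accidental deletion", "site disaster",
            "data corruption", "compliance hold"]

# Each tuple is one (platform, OS, location, failure) scenario that
# needs a defined tool, owner, recovery time and data-loss tolerance.
scenarios = list(product(platforms, operating_systems, locations, failures))
print(len(scenarios))  # 4 * 6 * 4 * 5 = 480 combinations already
```

Add networks, applications and vendors as further dimensions and the product quickly reaches the millions mentioned above – which is exactly why a single monolithic tool can't cover every line in the diagram.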
More on that next time!
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
In response to: Storage Consolidation with SONAS and TSM
Rich,
I love how you used your winter/summer clothing exercise as an example of consolidating storage and the pros and cons of having one large storage container vs. multiple smaller containers. I also put my off-season clothes in containers and should have started moving in the spring/summer attire and moving out the winter... maybe next weekend.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  ibm ibmpulse tivoli-storage-manager storage-blog storage tsm bare-machine-recovery recovery ibmstorage pulse2010 pulse 1,849 Visits
While I was at Pulse 2010 in Las Vegas, I had the opportunity to interview Scott Sterry from Cristie Software Limited. Cristie Bare Machine Recovery integrates with IBM Tivoli Storage Manager to provide a Bare Machine Recovery (BMR) solution for Windows®, Linux, Sun Solaris and HP-UX.
Tiffeni Woodhams 270001Q08F WOODHAMS@US.IBM.COM Tags:  virtualization tivoli resource-mgmt storage 1,080 Visits
In response to: Managing the tidal wave of data with IBM Tivoli
Thanks for posting the white paper. For more information about Tivoli Storage visit the blog at http://www.ibm.com/blogs/tivolistorage