Are you going to IBM Pulse 2012, the premier Cloud and IT Optimization event of the year? It’s at the MGM Grand in Las Vegas from March 3 – 7, and we have an awesome agenda with some first class speakers and entertainment.
But this blog is about our storage management software
ecosystem partners that will also be attending and lending their support. If you will be at Pulse, please plan to visit with these companies while enjoying the refreshments offered in the Expo Center:
- Butterfly Software offers an automated backup and storage assessment tool that can help
you identify problems in your environment and the costs you will likely
incur over the next 3 years; and shows what a new solution based on IBM
technologies will look like and cost. It’s all based on empirical data
that it gleans directly from your systems. Once you’re convinced to move
to IBM, Butterfly also offers non-disruptive migration services. If you
want to learn how much a smarter backup solution will save your
business, please stop by Pedestal # 32 in the IBM SmartCloud Zone, and
learn more at breakout sessions 1387 (5:00 Monday in room 117), and 1384
(3:30 Tuesday in room 117).
- Cristie Software is our partner for Bare Machine Recovery solutions for Tivoli Storage Manager (TSM). Hundreds, if not thousands, of our customers have deployed these solutions to help restore critical servers quickly. Learn about CBMR and TBMR at booth E-418 and breakout session 1035 (2:00 Tuesday, room 117).
- Front-Safe A/S provides a Tivoli Storage Manager front-end cloud portal that enables
business partners to offer “backup as a service” to their customers. If
you’re interested in moving to a services model for your backup
environment, please visit pedestal # 45 in the IBM SmartCloud Zone and
breakout session 1360 (3:30 Wednesday, room 115).
- Riverbed has a long and broad relationship with IBM. If you want to learn how to extend your TSM environment to the cloud, securely and cost effectively, please stop by booth E-105 and ask for a demonstration of Whitewater. Instead of buying more tapes for your backup archive data, Whitewater helps you move that data to the cloud, where you pay only for the storage you use. Riverbed Steelhead appliances are also used by many IBM customers to speed the movement of data between locations using proven WAN acceleration technologies.
- SEPATON was an early entrant into the Virtual Tape Library (VTL) market, and provides a cost-effective, high-performance alternative to magnetic tape. Visit them at booth E-107. (I might get fired if I didn’t mention that IBM also has an excellent VTL solution, ProtecTIER, that you can learn about at Pedestal # 29 in the IBM SmartCloud Zone).
- STORServer offers Tivoli Storage Manager in an integrated appliance, with a simplified, easy to use interface. It’s a great backup and recovery solution for mid-sized organizations and remote offices. Learn more at booth E-408 and at breakout session 1878 (3:30 Monday room 117).
And of course, I’ll be there as well. You can catch me around the storage pedestals in the Expo Center, and at breakout session 2136 (5:00 Tuesday, room 117). I hope to meet many of you there.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
On January 25, 2012, VMware officially unveiled its VMware Solution Exchange
(VSX), an online virtualization and cloud marketplace designed to help
customers, partners and developers locate and purchase VMware-certified
products. The VSX showcases solutions
from VMware's technology alliance partners, such as IBM. Included in VSX on day one were IBM Tivoli
Storage Manager for Virtual Environments, v6.3 and IBM Tivoli
Storage Productivity Center Standard Edition, v4.2.2.
IBM TSM for Virtual Environments is an excellent data protection and recovery
solution for VMware environments because it:
- Simplifies management of
the backup and restore process for virtual machines
- Utilizes VMware’s vStorage
APIs for Data Protection, including block-level incremental backups based
on VMware’s Changed Block Tracking
- Offloads the backup workload from virtual machines and production VMware ESX hosts to a vStorage backup server
IBM Tivoli Storage Productivity Center Standard Edition (TPC SE) delivers
advanced management for virtual server and storage environments. It provides significant benefits such as:
- Discovery of VMware ESX servers, VMware guest operating system images, and storage, including which VM images have storage allocated and from where
- Topology and visualization: hypervisor views within TPC SE include drill-down capability to show all VM images, with end-to-end correlation of SAN storage to ESX servers and VM guests
- Monitoring and Reporting
for ESX servers and VM guests, including health status and monitoring,
asset reporting, and capacity utilization
- Problem determination and
root cause analysis of storage problems, which help discover the 'real'
problem in a virtual world
- Storage provisioning from
any storage array to any ESX server.
If you want to learn more about these products while you are at Pulse 2012, be sure to visit the Solution Expo and our demonstration pedestals:
- Tivoli Storage Analytics – to learn more about the advanced
capabilities of TPC SE
- Data Protection for your Virtual Server Environments – to
learn more about TSM for Virtual Environments
Finally, if you are already a fan of TSM for VE and/or TPC SE, then log into the VMware Solution Exchange and write a product review. Also, don’t forget to give the products a 5-star rating!
Are you going to the IBM PULSE conference (ibm.com/pulse)? I am, and I am hosting a panel discussion on the need to modernize backup and restore capabilities.
Scheduled to join me on the panel are:
- Randy Olinger, Director of Enterprise Storage Systems, UnitedHealth Group
- Gerardo Colon, Storage Administrator, Adventist Health System
- Peter M. Nielsen, CEO and Founder, Front-Safe A/S
The premise of the panel discussion will be that backup and restore just aren't as easy as they used to be, given the increasing complexity and distribution of IT, the growth of data to unsustainable levels, the pressure to improve service levels by reducing and eliminating downtime, and the need to cut spending. Our panel of experts will share how their organizations are dealing with these and other challenges, and I'm guessing that we'll cover technology solutions such as data deduplication and compression, snapshots and CDP, replication, simplified and unified administration, archiving and data lifecycle management, and how to do all these things while driving down costs.
But that's part of the fun of a panel discussion -- you never really know what you're going to get. It's scheduled for Tuesday afternoon, March 6th at 5:00PM Las Vegas time, in room 117. The session number is 2136. I hope you can make it!
Oh - and have you heard - Maroon 5 and iLuminate will be entertaining us during the event; you have to go!
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
“Backed up by IBM Tivoli Storage Manager (TSM)” is a unique Ready for IBM Tivoli program classification for clients, business partners and managed service providers who use the IBM Tivoli Storage Manager family of offerings as a core component of a data protection and recovery managed services or cloud-based offering. It is IBM’s new partner program for validated TSM cloud solutions.
Achieving Ready for IBM Tivoli software validation shows
customers that your solution meets or exceeds IBM compatibility criteria and
successfully integrates with the IBM Tivoli Storage Manager family of
offerings. Backed up by IBM TSM validation further demonstrates that your offering is an integral part of a TSM cloud or managed service solution.
Want to learn more? Then be sure
to stop by one of the following venues while you are at Pulse
2012 for more details about this new program and how you can participate:
- Partner Summit – Sunday, March 4: information on the program will be included in all the breakout sessions
- Partner Café – visit the Ready for Tivoli / Backed up by IBM TSM table
- Solution Expo – visit our demonstration pedestal, Optimizing Cloud Based Data Protection Services with Tivoli Storage Manager
Can’t wait until Pulse and want to learn more now? Then contact John Connor on the IBM Tivoli Storage Manager product management team at email@example.com.
Every year I try to publish a set of storage trends that I believe most IT shops are trying to address, and for which technologies exist to help. Here are my thoughts for 2012...
1) Storage breakthroughs
nipping the “Digital Dark Age” in the bud
Since the early 1990’s, an increasing proportion of data
created and used has been in the form of digital data. Today, the world
produces more than 1.8 zettabytes of digital information a year. Yet, digital storage can in many ways be more perishable
than paper. Disks corrode, bits “rot” and hardware becomes obsolete. This
presents a real concern of a “Digital Dark Age” where digital storage
techniques and formats created today may not be viable in the future as the
technology originally used becomes antiquated. We’ve seen this happen: take the floppy disk, for example, a storage tool so ubiquitous that people still click on its enduring icon to “save” their word, presentation and spreadsheet documents, yet most Millennials have never seen one in person. But new research shows storage mediums can be vastly denser than they are today. New form factors such as solid state disks will help us provide more stable, longer-term preservation of data, while the promise of "the cloud" allows access to data anywhere, anytime. Recently, IBM researchers combined the benefits of magnetic hard
drives and solid-state memory to overcome challenges of growing memory demand
and shrinking devices. Called Racetrack memory, this breakthrough could lead to
a new type of data-centric computing that allows massive amounts of stored
information to be accessed in less than a billionth of a second. This storage research challenges previous theoretical
limits to data storage—ensuring our digital universe will always be preserved.
2) Data curation will provide
structure in midst of the data deluge
Now that we have the capability to preserve our digital
universe, we need to find a way to make it useful. We need to take the next
step past data preservation to data curation. Data curation is the active and ongoing management of data
through its lifecycle. This smarter data categorization adds value to data that
will help glean new opportunities, improve the sharing of information and
preserve data for later re-use. Social media is a great example of the power of curated data. Sites like Facebook, Google+, Pinterest, etc. compile our digital lives and give their users a platform to organize their content. However, there's also a lot of work involved in selecting, appraising and organizing data to make it accessible and interpretable. The
key is bringing data sets together, organizing them and linking them to related
documents and tools. If data can be stored in a way that provides context,
organizations can find new and useful ways to use that data.
3) Storage analytics will open
new business insights
With data curation allowing organizations the platform to
better utilize their data, analytics will help turn that data into intelligence
and, ultimately, knowledge. With the information that historical trending analytics
and infrastructure analytics provide, you can index and search in a more
intelligent way than ever before. By doing analytics on stored data, in backup
and archive, you can draw business insight from that data, no matter where it
exists. The application of IBM Watson technology for healthcare
provides a good example. Watson collects data from many sources and is able to
analyze the meaning and context. By processing vast amounts of information and
using analytics, it can suggest options targeted to a patient's circumstances and assist decision makers, such as physicians and nurses, in identifying the
most likely diagnosis and treatment options for their patients. Through intelligent storage and data retrieval systems, we
can learn more with the information we have today to improve service to
customers or open new revenue streams by leveraging data in new ways.
4) Storage becomes a celebrity
– new business needs are pushing storage into the spotlight
As our digital and data-driven universe expands, certain
industries are able to reach new levels of innovation by having the capacity to
house, organize and instantaneously access information. For example, Hollywood is known for its big budget
blockbusters, but it’s the big storage demands required by new formats such as
digital, CGI, 3D and high definition that’s impacting not just the bottom line,
but studios’ ability to produce these types of movies. Data sets for movies
have grown to the petabyte level. Filmmakers are beginning to trade in film reels for SSDs
as just one day’s worth of filming can generate hundreds of terabytes of data.
The popularity of these high data-generating formats means studios are looking
for new storage technologies that can handle the demand. The healthcare industry may be facing an even bigger data dilemma than the entertainment business. Take a look at the University of Leipzig in Germany, which has a major genetic study called LIFE
to examine disease in populations. LIFE is cataloging genetic profiles of
several thousand patients to pinpoint gene mutations and specific proteins.
This process alone generates multiple terabytes of data. Even one 300-bed hospital may generate 30 terabytes of
data per year. Those figures will only grow with higher-resolution medical imaging, and new tools and services such as electronic healthcare records.
5) Intervention...The Data
In this era of Big Data, more is always better, right? Not
so – especially when every byte of data costs money to store and protect. Businesses are turning into data hoarders and spending too
much time and money collecting useless or bad data, potentially leading to
misguided business decisions. This practice can be changed with simple policy
decisions and implementing existing capabilities in technologies that exist in
smarter storage, but companies are hesitant to delete any data (and many times
duplicate data) due to the fear of needing specific data down the line for
business analytics or compliance purposes. Part of the solution starts with eliminating the copies.
Nearly 75% of the data that exists today is a copy (IDC). By deleting and
disabling redundant information, organizations are investing in data quality
and availability for content that matters to the business. Consider the effect of unneeded data, which costs money as it replicates throughout an organization’s information systems. This outdated data can also potentially be accessed for fraud. Ensuring the quality of data is not costly; not getting it right is.
ARE YOU SPEAKING AT PULSE?
IF SO, READ ON PLEASE...and book your room at the MGM Grand today to avoid a price increase!
1. Have you uploaded your presentation?
The deadline to upload presentations was January 20th to enable appropriate reviews and posting to the Pulse 2012 SmartSite Agenda Builder. Your presentation will be converted to PDF and can be downloaded or printed in advance by attendees, pending your approval. For a full list of presentation guidelines and processes please review the Presentation tab on the online Speaker Kit.
2. Do you know what audio visual equipment will be available in your session room?
Click the A/V tab in your online Speaker Kit
to review this important information.
3. Are you connected?
Follow the conference news & highlights on Twitter or the Pulse blog. Click the Speaker Kit tab to find links and hashtags for use with social media. Find Pulse attendees using the Pulse SmartSite agenda builder.
4. Attendees are always interested in getting to know their speaker! Do you have a bio?
Review and update your brief bio by logging onto the Speaker Kit.
5. Have you started to build your Pulse conference agenda on SmartSite, the attendee conference portal?
You will need your conference registration confirmation number to login to this site. Click the Build My Agenda icon to view scheduled sessions.
6. Have you registered for the conference and booked your hotel?
Review the registration instructions listed in the registration tab on the speaker kit website.
Very important... Conference hotel accommodations are limited and available on a first-come, first-served basis. Conference rates are valid until January 27, 2012, or until the room block is sold out, whichever comes first.
Please take a few minutes to review the information in your online Speaker Kit, and follow-up on all speaker actions as needed.
If you have any questions or need additional information, please contact speaker support at PulseSpeaker@experient-inc.com. We look forward to seeing you at the MGM Grand in Las Vegas March 4-7!
IBM has detailed innovative projects and research that show new
storage approaches to support Big Data growth and drive business innovation.
Healthcare, financial services, media and entertainment, and
scientific research among many industries face the challenge of storing and
managing the proliferation of data to extract critical business value. As
storage needs rise dramatically, storage budgets lag, requiring new innovation
and approaches around storing, managing, and protecting Big Data, cloud data,
virtualized data and more.
Watson-inspired Storage Takes on the Cosmos: IBM is working on a project with the Institute
for Computational Cosmology (ICC) at Durham University in the U.K. and Business
Partner OCF to build a storage system to better store and manipulate Big Data
for its cosmology research on galaxies. ICC is adopting the same IBM General
Parallel File System technology used in the
IBM Watson system to store and manage more than one petabyte of data from two
significant projects on galaxy formation and the fate of gas outside of
galaxies. The enhanced storage system will enable up to 50 researchers, working collaboratively, to access and review data simultaneously. It will also help ICC learn to manage data better, storing only essential data and storing it in the most appropriate way.
New Storage Platform Delivers More Personalized, Visual
Healthcare: A medical archiving
solution from IBM Business Partners Avnet Technology Solutions and TeraMedica,
Inc. powered by IBM systems, storage and software gives patients and caregivers
instant access to critical medical data at the point-of-care. Developed in
collaboration with IBM, the medical information management offering can manage
up to 10 million medical images, helping health care practitioners provide
better patient care with greater efficiency and at reduced costs. The
integrated platform allows users to manage and view clinical images originating
from different treatments and providers to bring secure, consistent image
management and distribution at point-of-care.
Virtualization Consolidates Storage Footprint for Medical Center: Kaweah Delta Health Care District (KDHCD), a
general medical and surgical hospital in Visalia, Calif., needed to reduce its
operational costs while increasing storage space. To meet these demands, KDHCD
tapped IBM's storage systems to create a new storage platform that reallocates
resources and saves a significant amount of data space with thin-provisioning
technology. Virtualization creates a smaller hardware footprint so the hospital
also saved on power and cooling costs. KDHCD now has a consolidated storage
environment that provides the scalability, ease-of-management, and security to
support critical healthcare data management for the hospital.
Often data center managers find it difficult to accommodate data growth, while maintaining high levels of storage service and availability. In addition to these challenges, new IT initiatives such as virtualization and cloud services introduce additional complexity to already stressed out administrative staff.
IBM's Integrated Service Management solutions can help organizations realize the full potential of their business by providing a holistic approach to delivering and managing IT services. Specifically, IBM Tivoli Storage Productivity Center is designed to equip today’s IT organizations with critical capabilities for visibility, control and automation in the storage environment.
Download and read the latest white paper, "Gain visibility, control and automation in your storage environment".
Survey of IT Decision Makers Sheds Light on Need for a New Class of Storage
Late last year, IBM issued survey results that shed light on the storage spending priorities and organizational needs for the near future. Conducted by Zogby International on behalf of IBM, the survey of 255 IT professionals in decision-making positions showed that the majority of respondents (57 percent) agree their organization needs to develop a new storage approach to manage future growth.
The survey underscores the need for a new class of storage that can expand the market for solid-state drives (SSDs) by combining their ability to speed the delivery of data with lower costs and other benefits. Nearly half (43 percent) of IT decision makers say they have plans to use SSD technology in the future or are already using it. Speeding delivery of data was the motivation behind 75 percent of respondents who plan to use or already use SSD technology. However, the major factor for not using SSD was cost, according to 71 percent of respondents.
To address this issue, IBM Research has been investigating a potential in solid-state breakthrough called “Racetrack memory” that could someday access data significantly faster than hard-disk drives—at the same low cost—and be a successor to flash in handheld devices.
The survey also found that:
· Nearly half (43 percent) say they are concerned about managing Big Data.
· Nearly half (48 percent) say they plan to increase storage investments in virtualization, followed by cloud (26 percent), flash memory/solid state (24 percent) and analytics (22 percent).
· More than a third (38 percent) said their organization’s storage needs are growing primarily to drive business value from data. Adhering to government compliance and regulations that require organizations to store more data for longer -- sometimes up to a decade -- was also a leading factor (29 percent).
· About a third of all respondents (32 percent) say they either plan to switch to more cloud storage in the future or currently use cloud storage.
Organizations are faced with an increasing challenge of storing, analyzing, and protecting ever-expanding data sets that hold significant business value, driving the need for radical new approaches to storage fueled by innovation. Cloud computing, analytics and more advanced storage management technologies will be critical to tapping into that data and turning it into intelligence.
Focused on developing disruptive innovation and pushing the boundaries of data exploration and utilization, IBM Research drives new approaches to managing data, including storage for cloud systems that are geographically dispersed, adding autonomic behavior to storage systems, creating archival systems that prevent a “digital dark age,” and optimizing storage for analytics.
In the last year, IBM Research has recorded a number of storage technology breakthroughs including a 29-gigabit per-square-inch tape demonstration; a world record of scanning 10 billion files in 43 minutes; and, more recently, the creation of a 120-petabyte data system that is roughly 30 times larger than the biggest single data repository on record.
In response to: Enabling TSM Unified Recovery Management Replication
Want to learn more about how HyperIP can help accelerate your data
transfer by as much as 12x? Join NetEx and IBM Tivoli Storage
Software for a webinar on Jan. 25, 1PM EST to hear all about how
pairing Tivoli Storage Manager 6.3 with NetEx HyperIP can help you
achieve this! Register here: http://bit.ly/xQFHdm
In the IBM Thought Leadership Whitepaper, 10 Ways to Save Money with IBM TSM, “IBM Tivoli Storage Manager Suite for Unified Recovery simplifies and streamlines storage management, helping organizations control both the risks and costs of data protection and recovery.” This blog post visits the savings NetEx’s HyperIP offers by running TSM Replication, a feature of Tivoli Storage Manager Extended Edition and Tivoli Storage Manager Suite for Unified Recovery, over the WAN.
Previous blog posts talk about the performance improvement of TSM replication over HyperIP (http://www.netex.com/blog/?p=206). The following chart describes the true performance of replication over HyperIP (data provided by NetEx):
HyperIP enables TSM replication to see near wire speed, over any distance, even over lossy WANs. With HyperIP’s block-level compression, throughputs can literally exceed wire speed by as much as 6x; with lossy WANs, over 12x. This means a replication window that moves gigabytes of data can be reduced from hours to minutes, without having to increase the bandwidth of the WAN links between remote TSM server nodes. Bandwidth savings alone can return the HyperIP investment in less than 3 months.
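To see why a multiplier like that collapses the window, here's a back-of-the-envelope sketch; the data size, link speed and multipliers below are purely illustrative, not NetEx benchmark figures.

```python
# Back-of-the-envelope model of how a replication window shrinks as the
# effective WAN throughput multiplier grows. All figures are illustrative,
# not NetEx benchmark results.

def window_hours(data_gb: float, wan_mbps: float, multiplier: float = 1.0) -> float:
    """Hours needed to replicate data_gb over a wan_mbps link at a given
    effective-throughput multiplier (e.g. 6x from compression)."""
    seconds = (data_gb * 8 * 1000) / (wan_mbps * multiplier)  # GB -> megabits
    return seconds / 3600

data_gb = 500     # hypothetical nightly delta to replicate
link_mbps = 100   # WAN link speed

for mult in (1, 6, 12):
    hours = window_hours(data_gb, link_mbps, mult)
    print(f"{mult:>2}x effective throughput: {hours:5.1f} h")
# 1x ~ 11.1 h, 6x ~ 1.9 h, 12x ~ 0.9 h -- from half a day down to about an hour
```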
For more information, visit http://www.hyperip.com or contact your IBM Business Partner for more information on Tivoli Storage Manager replication over HyperIP. Stay tuned for upcoming co-sponsored webinars with the IBM Tivoli team and NetEx. NetEx is a proud exhibitor at Pulse 2012.
Author: Steve Thompson, NetEx (firstname.lastname@example.org)
In October 2011, IBM added native replication of backup data in Tivoli Storage Manager Extended Edition
v6.3 to help customers add "warm standby" disaster recovery capabilities to their unified recovery
management platform. This is a powerful new feature that can help reduce the costs of maintaining a separate DR point solution, and simplifies the overall management of the environment.
However, when moving data between physical locations, especially over the long distances desired for a true disaster recovery solution, network latency can become a significant issue. TSM replication is extremely efficient, in that it sends only incremental, deduplicated data between sites. But transfer times can still be impacted by network latency over long distances.
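The latency effect is easy to quantify: a single standard TCP stream can never move faster than its window size divided by the round-trip time, no matter how fat the pipe. A minimal sketch, with illustrative window and RTT values:

```python
# Why latency, not just bandwidth, caps long-haul transfers: one standard
# TCP stream can move at most (window size / round-trip time), no matter
# how fast the link. Window and RTT values below are illustrative.

def tcp_ceiling_mbps(window_kb: float, rtt_ms: float) -> float:
    """Maximum throughput (Mbit/s) of a single TCP stream: window / RTT."""
    return (window_kb * 8) / rtt_ms  # KB * 8 = kilobits; kilobits/ms = Mbit/s

for rtt_ms in (5, 25, 80):  # metro, regional, cross-country round trips
    print(f"RTT {rtt_ms:>3} ms -> ceiling {tcp_ceiling_mbps(64, rtt_ms):6.1f} Mbit/s")
# With a 64 KB window: 102.4, 20.5 and 6.4 Mbit/s -- far below even a
# 100 Mbit/s link once the distance grows. WAN accelerators attack this ceiling.
```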
To overcome this problem and provide near native transmission speeds, WAN acceleration solutions such as NetEx HyperIP can be deployed.
NetEx recently completed testing of their solution with the new TSM replication feature and found that it can accelerate data transfer by as much as 6 times, or 12 times with HyperIP’s block-level compression. To learn more, please visit http://www.netex.com/blog/?p=206
I know. I’m sinking pretty low when I borrow a line from an animated gecko. But as I keep thinking that data backup and restore systems are very much like automobile insurance, I just can’t resist.
Think about it – what value do you get from paying for auto insurance, other than the peace of mind that should some fool run into you, you’ll be able to get back on the road in a reasonable amount of time and at a reasonable expense? The same is true with data backup: on its own, it offers little value while costing a lot of time and money, but you had better have one when something / anything goes wrong.
As with your auto insurance, you want to pay as little for backup/restore as possible, while meeting your service level objectives. There are choices to be made that impact your costs and your recovery capabilities – does your policy include towing, collision repair, or the use of a rental car while yours is in the shop? And what is the out-of-pocket deductible you have to pay per accident?
Same thing with backup – which data do you protect, how often do you perform backup, how many versions and copies do you keep, how long do you keep them, where do you distribute them, how fast do you need to restore? All of these service level considerations can impact your costs.
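If you want to put rough numbers on those knobs, a toy cost model helps. The sketch below uses invented figures and deliberately ignores TSM features like incremental-forever backup and deduplication that would shrink the totals:

```python
# Toy cost model for the backup service-level knobs above. All inputs are
# hypothetical, and the model deliberately ignores TSM features such as
# incremental-forever backup and deduplication that would shrink the totals.

def backup_storage_tb(primary_tb: float, versions: int, copies: int,
                      change_rate: float = 0.05) -> float:
    """Rough capacity consumed: full data plus changed versions, times copies."""
    versioned = primary_tb * (1 + change_rate * (versions - 1))
    return versioned * copies

COST_PER_TB_MONTH = 30.0  # made-up storage cost

for versions, copies in [(3, 1), (7, 2), (14, 2)]:
    tb = backup_storage_tb(100, versions, copies)
    print(f"{versions:>2} versions x {copies} copies: "
          f"{tb:6.1f} TB ~ ${tb * COST_PER_TB_MONTH:,.0f}/month")
```

Even in this crude model, doubling versions and copies roughly triples the monthly bill, which is exactly why those service level choices deserve scrutiny.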
At IBM, we recognize that on the one hand, your business requires the most advanced, reliable and scalable data protection solutions for your applications and data; and on the other hand, the investments in these solutions are nothing more than insurance – they don’t contribute to the top line, and they only contribute to the bottom line when they are called upon to recover operations following a data loss disaster.
We are helping our customers meet these conflicting challenges through an evolution of continuous improvements to our data protection and recovery software, led by Tivoli Storage Manager, that can dramatically improve your business continuity service levels while reducing your costs even more dramatically.
To learn how you may be able to cut the costs of your backup environment by 50% or more, please invest 15 minutes reading our new whitepaper, Ten Ways to Save Money with Tivoli Storage Manager.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
IBM is looking for customers and business partners who are interested in participating in an Early Access Program (EAP)/Beta Program for an upcoming release of FlashCopy Manager, Data Protection for SQL, and Data Protection for Exchange. If you would like to nominate your organization to participate in this EAP/Beta, please send an email to:
Mary Anne Filosa (email@example.com)
and be sure to include your organization's name. Once your email is received you will be sent instructions on signing off on the EAP/Beta legal form online and, when that signoff has been completed, you will be sent a link to the program's nomination site. We encourage you to respond quickly if you are interested, as the program begins in mid-December.
Live Webcast: Using Tivoli Storage Productivity Center to be the "eyes" into your SAN environment, and to see how that environment is changing. LIVE!
In the ever-changing SAN environment, Tivoli Storage Productivity Center has many components to help the Storage Administrator know when and where to focus their attention. We will walk through many of these in a live demo and see how they can be used.
Let TPC help you keep up with storage growth instead of working longer hours!
Speaker: Scott McPeek, IBM Program Director, Storage Sales Enablement. He has worked in the software industry for more than 30 years; the last ten have been with IBM as part of the TrelliSoft SRM acquisition. Scott now focuses on storage resource management, storage performance management and virtualization with products like TPC, SVC and the Storwize V7000.
Register for this Live Webcast here
How are you spending your time this weekend? Polishing up your Pulse 2012 storage session abstract, hopefully!
With only 4 days left to submit a 100-word abstract
by Nov. 7, we thought it would be helpful to share some final pointers. Keep in mind that this year's theme
is Business Without Limits and we are seeking to understand how you
gained visibility, control and automation to deliver better business results.
What are the key benefits to you as a Speaker?
One full Pulse conference pass ($1995 value) and the opportunity to gain visibility for your company, and take advantage of an incredible networking opportunity with over 7,000 industry experts, press, and analysts. Here are some pointers on how to get your Storage Management session abstract accepted:
1. Focus it on topics such as how you used Tivoli Storage Manager to manage "big data"; success with recent upgrades; or cloud storage
2. Tell us about the key business challenges you were trying to solve, and how IBM Tivoli storage solutions helped you address these challenges
3. What was the ROI, or key results, from implementing a Tivoli storage solution, and what valuable lessons did you learn from the experience
If you do not plan to speak at Pulse (speakers attend the conference complimentary), don't forget to register during early bird registration by December 16. Early Bird registration can save you up to $700 off registering onsite! See you at Pulse 2012!
Well it's that time again, hard to believe, I know...PULSE call for papers has opened, and we want to have another banner year in the Tivoli Storage Sessions! Last year we were standing room only in many of our sessions, and this year we hope to fill each room once again.
As for topic suggestions, we'd like to hear from customers who:
- Recently upgraded
- Use TSM to manage 'big data'
- Have best practices, created with our Tivoli Storage portfolio that they want to share
It's simple: just go to this link and submit a 100-word abstract.
The deadline is November 7th,
so there's no time like the present!
Speaker Benefits include:
- One full conference pass ($1995 value). Only one speaker per company, per session qualifies.
- Use of our exclusive Client Speaker VIP Lounge
- Networking opportunities with over 7,000 industry experts, press, and analysts
- Your company’s name in the Pulse Pocket Agenda and a description of your presentation and speaker details on the Pulse SmartSite
Your abstract should include:
- Initial business challenges and objectives
- Statistics about your deployment layout and company
- The IBM solution/products applied by your organization
- How the IBM solution/products help address the pain points
- Lessons learned from the experience
In my earlier post – Eliminate management inefficiencies and complexities associated with your cloud foray
– I briefly touched upon ‘storage tiering reports’. Now these reports are available as part of this week’s Tivoli Storage Productivity Center v4.2.2 announcement. One of the latest Storage Wave studies by The InfoPro points to ‘Tiered Storage Build Out’ as one of the top 3 initiatives among storage managers. Yet in a complex, virtualized environment, having complete visibility and control over storage tiering can be challenging.
Tivoli Storage Productivity Center provides capabilities for reporting on storage tiering activity to support data placement and to optimize resource utilization in a virtualized environment. The storage tiering reports leverage the estimated capability and actual performance data for IBM SAN Volume Controller and IBM Storwize V7000, and offer storage administrators key insights such as:
- Are the backend subsystems optimally utilized?
- Does moving a certain workload to low cost storage impact service levels?
- How can performance be leveled out in a certain pool?
- Which data groups can be moved to an alternate tier of storage?
Image: Sample tiering distribution report
By having a comprehensive view of performance stress on the hardware, storage tiering reports enable administrators to make proactive decisions about volume placement, thus averting any downtime or impact on the data availability.
Tivoli Storage Productivity Center enables storage administrators to optimize disk configurations, such as by progressively and dynamically changing storage tier percentage distributions between high-end, mid-range and low-end storage. For example, an initial 70/30/0 split can be changed to a new distribution of 30/50/20, enabling the organization to realize the corresponding storage infrastructure savings.
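To make the savings tangible, here's a hypothetical calculation of that 70/30/0 to 30/50/20 shift; the per-terabyte tier prices are invented for the illustration.

```python
# Illustrative arithmetic for the 70/30/0 to 30/50/20 shift described above.
# The per-terabyte tier prices are invented for the sketch.

TIER_COST = {"high_end": 8000, "mid_range": 3500, "low_end": 1200}  # $/TB, hypothetical

def infra_cost(total_tb: float, split: dict) -> float:
    """Cost of total_tb spread across tiers per a {tier: fraction} split."""
    return sum(total_tb * frac * TIER_COST[tier] for tier, frac in split.items())

before = {"high_end": 0.70, "mid_range": 0.30, "low_end": 0.00}
after  = {"high_end": 0.30, "mid_range": 0.50, "low_end": 0.20}

total_tb = 500
print(f"Before: ${infra_cost(total_tb, before):,.0f}")   # $3,325,000
print(f"After:  ${infra_cost(total_tb, after):,.0f}")    # $2,195,000
print(f"Saved:  ${infra_cost(total_tb, before) - infra_cost(total_tb, after):,.0f}")
```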
To read more about Tivoli Storage Productivity Center, click here
What’s new in Tivoli Storage Productivity Center?
IBM announces Tivoli Storage Productivity Center Select
- comprehensive storage management software that offers advanced provisioning, performance management, capacity optimization and reporting capabilities. Select includes all key capabilities of the Basic Edition, Disk and Data modules of the Tivoli Storage Productivity Center family, and is conveniently packaged for ‘per enclosure’ licensing.
Select complements Tivoli Storage Productivity Center for Disk Select
(formerly Disk Midrange Edition) and is ideal for management of IBM XIV
, Storwize V7000, DS3000, DS4000, DS5000 as stand-alone devices or when attached to a SAN Volume Controller. Select also supports any device that is attached to Storwize V7000.
Learn more about Select. Download the Select data sheet.
TSM FastBack for Workstations is a centrally-managed solution that reduces the risks of losing important information stored on thousands of personal computers across an entire enterprise, as described here: http://www-01.ibm.com/software/tivoli/products/storage-mgr-fastback-workstation/
IBM will be running a beta program for the next release of this product, providing those taking part with early access to the latest planned enhancements. If you would like to participate, please contact the beta coordinator, Matthew Boult (firstname.lastname@example.org).
NEW!! Technical Services Webinar: Capacity Planning in a Tivoli Storage Manager Environment
As much as customers would like to "backup everything and keep it forever", storage is not unlimited. The reality of ever-increasing data growth, combined with regulatory compliance and the associated risks, makes the arduous task of capacity planning for backup ever more critical. A new Reporting and Monitoring tool is available with Tivoli Storage Manager (TSM). This new tool, based on IBM Tivoli Monitoring, can collect and report on historical data and is an integral part of a capacity planning regimen.
This session will demonstrate a capacity planning methodology that conforms to the ITIL Capacity Planning process description by showing how the TSM Reporting and Monitoring tool and other TSM components can be utilized to ease the pain of capacity planning. Additionally, this session will look at strategies, like data deduplication, to reduce the amount of backup data while maintaining regulatory compliance.
Presenters:
Mark Vanderboll, IBM Tivoli Global Response Team
Dave Daun, IBM Advanced Technical Skills
Access the webinar here: http://bit.ly/qdOuJU
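As a taste of the kind of projection such a capacity planning regimen produces from historical reporting data, here's a hypothetical sketch: compound data growth against a fixed storage pool, with and without deduplication. All figures are invented.

```python
# Hypothetical projection of the kind a capacity planning regimen produces
# from historical reporting data: compound data growth against a fixed
# storage pool, with and without deduplication. All figures are invented.

def months_until_full(current_tb: float, capacity_tb: float,
                      monthly_growth: float, dedup_ratio: float = 1.0) -> int:
    """Months before stored (post-dedup) data exceeds the pool capacity."""
    months, stored = 0, current_tb
    while stored / dedup_ratio < capacity_tb:
        stored *= 1 + monthly_growth
        months += 1
    return months

print("No dedup:  pool full in", months_until_full(60, 100, 0.04), "months")        # ~13
print("3:1 dedup: pool full in", months_until_full(60, 100, 0.04, 3.0), "months")   # ~41
```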
Did you see that we announced a new version of Tivoli Storage FlashCopy Manager?
Here are the highlights of IBM® Tivoli® Storage FlashCopy® Manager V3.1:
- Advanced data protection and recovery features for VMware vSphere environments
- Enhanced data protection capabilities for Microsoft® Windows®, including support for the New Technology File System (NTFS) and custom applications, and enhanced user interfaces for Microsoft Exchange and Microsoft SQL Server
- Support for IBM DB2® and Oracle databases (with or without SAP environments) on IBM AIX®, Solaris SPARC, Linux® x64, and HP-UX IA64 platforms
- Support for custom business-critical applications on IBM AIX, Solaris SPARC, Linux x64, HP-UX IA64, and Microsoft Windows platforms
- Transparent integration with IBM storage systems such as IBM System Storage® SAN Volume Controller space efficient FlashCopy target volumes, IBM Storwize® V7000, IBM XIV® Storage System, and IBM System Storage DS8000™
- Can leverage the Microsoft Volume Shadow Copy Services (VSS) framework for integration with non-IBM hardware subsystems
- Database cloning support for UNIX® and Linux clients
It will be generally available on October 21, 2011.
Check out the full announcement here:
Check out the NEW Tivoli Storage Manager v6.3
You should expect more from your storage, and from your storage vendor. On October 11 and 12, IBM is announcing a broad range of new and enhanced storage products that help to meet this challenge.
Included are significant updates to the Tivoli Storage Manager (TSM) family. TSM is already the data protection software leader in scalability, functionality, data reduction, performance and reliability. The v6.3 release will keep us ahead of the competition, and will help to keep you ahead of the challenges you’re facing. Struggling with data growth? No problem.
The scalability of TSM is being doubled for the 3rd year in a row, now supporting up to 4 billion data objects in a single TSM Server. In 2008, the internal database limit was 500 million files, so we’ve made an 8X improvement since then. That means fewer backup servers are needed. And remember that TSM is a single server architecture; we don’t add “media servers” to provide scale.
Unified Recovery Management now includes Replication for faster Disaster Recovery
We’ve been working to simplify the data protection and recovery infrastructure by unifying the management of all the different tools you need for different applications, operating systems, data locations, and data loss causes. In the release of Tivoli Storage Manager Extended Edition v6.3, we’re adding client data and metadata replication to an off-site TSM Server. This provides a “hot standby” disaster recovery capability, managed from within the TSM Admin Center. The replication is asynchronous and can be scheduled on a per client basis to minimize the impact on network bandwidth. And it can be configured in a classic source-to-target model as well as between two active sources, many-to-one, or in a “round robin” architecture.
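For the command-line inclined, setting up node replication comes down to a handful of administrative commands, along the lines of the sketch below, which drives the dsmadmc administrative client from Python. The server name, node name and credentials are placeholders; consult the v6.3 documentation for your environment.

```python
# Minimal sketch of configuring TSM 6.3 node replication by driving the
# dsmadmc administrative client from Python. The server name, node name and
# credentials are placeholders; consult the v6.3 documentation for specifics.
import subprocess

ADMIN = ["dsmadmc", "-id=admin", "-password=secret", "-dataonly=yes"]

def tsm(command: str) -> str:
    """Run one administrative command against the source TSM server."""
    result = subprocess.run(ADMIN + [command], capture_output=True, text=True)
    return result.stdout

tsm("SET REPLSERVER DR_SERVER")                 # point at the off-site target
tsm("UPDATE NODE payroll01 REPLSTATE=ENABLED")  # opt the client node in
tsm("REPLICATE NODE payroll01")                 # send incremental, deduplicated data
```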
Simplifying the administrator’s life
One of the painful tasks an administrator has to do, especially in large environments, is patching the backup/archive client software on protected systems. With this release, we’re adding the ability to automatically push out client software updates across AIX, HP-UX, Linux and Windows systems (Windows push support was actually introduced last year). This new capability should reduce the time needed to perform an update by at least 80%.
Improved integration with VMware
Tivoli Storage Manager for Virtual Environments v6.3, our non-disruptive off-host solution for VMware virtualized servers, now supports VMware vSphere 5 and includes a plug-in for vCenter to easily manage TSM backup and restore operations from within the VMware environment. Tivoli Storage FlashCopy Manager v3.1 is also being released with VMware vStorage APIs for Data Protection integration and the vCenter plug-in to provide hardware-assisted application-aware snapshot management. Support for DB2, Oracle and SAP databases on HP-UX is also added in the new FlashCopy Manager release.
Something BIG for mainframe customers
Tivoli Storage Manager for z/OS Media v6.3 is a new connector that enables customers to leverage their mainframe-attached FICON storage devices for storing TSM data. This offering won’t increase the licensing costs for existing customers that move their TSM v5.x Server software from z/OS and install TSM v6.3 Server on an AIX system or a Linux on z partition, and gives them access to all of the cost-saving improvements made in TSM over the past 3 years.
The new standard in licensing simplicity
In June we announced the availability of the Tivoli Storage Manager Suite for Unified Recovery. This bundle of ten TSM and FastBack products is simply licensed by the amount of data being managed within the TSM environment, first copy only. We have seen outstanding results from this new offering, from both new and existing customers. The reason is simple: you want to use the right tool for each data protection job, but you don’t want to have to worry about acquiring and managing individual product licenses for each one. This is especially true in virtual server and cloud environments. Added benefit: our broad range of built-in data reduction technologies can dramatically reduce the costs of this offering.
Tivoli Storage Manager Suite for Unified Recovery v6.3 is also being announced, and includes all of the TSM and TSM for Virtual Environments enhancements noted above.
What else?
Many other improvements are being introduced across the family, including better reporting and monitoring; better scalability for Microsoft Windows, Exchange and SQL Server; and faster internal processes such as database backup. SAP customers using TSM for Enterprise Resource Planning v6.3 can now do incremental backups with Oracle RMAN.
For more information on the Tivoli Storage Manager enhancements, please refer to the announcement letter on ibm.com (link)
To learn more about all the new IBM Storage announcements, please click here
(live 12 Oct.)
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Continuing my earlier post – Eliminate management inefficiencies and complexities associated with your cloud foray – I would like to call out some of the advanced capabilities that Tivoli Storage Productivity Center offers.
Ron’s recent post on choosing the right storage hypervisor
points to ‘comprehensive performance monitoring’ as one of the key capabilities you need to successfully deploy cloud storage. This thought reinforces the need for sophisticated tools that can help you significantly reduce the burden of storage configuration (think best practices) and performance monitoring.
Bottleneck analysis
It’s no longer the network administrator – when system response is poor, it’s the storage administrator who gets the call. Especially in a virtualized environment, it is essential to have performance monitoring tools that provide a quick yet comprehensive view of the data path to ascertain any bottlenecks. With Tivoli Storage Productivity Center, you will be able to see where bottlenecks have occurred; for example, one storage subsystem may be overutilized while another is underutilized.
Data Path Explorer offers a detailed view of all the storage entities and their connectivity. It provides performance information across the entire data path – from host to array – and allows you to drill down and view performance metrics at the port level. Standard Edition, the advanced module within Tivoli Storage Productivity Center, offers advanced reporting capabilities for bottleneck analysis.
According to a storage manager at a leading medical university, “With Tivoli Storage Productivity Center, I can quickly determine if there exists a bottleneck in the SAN infrastructure. Earlier it could take me days or sometimes weeks to figure that out. Now, I can do it in minutes”.
If you have recently deployed Tivoli Storage Productivity Center, make use of IBM’s Value Pack service offerings, which provide analysis of disk subsystem performance bottlenecks using native product tools. Talk to your IBM sales representative or IBM business partner for more information.
Configure your SAN the best practices way
Administrators are expected to ensure high availability for SANs. SAN configuration has traditionally been done manually. But as the complexity in managing the storage network grows, you need sophisticated tools to control and even optimize storage configurations. And adherence to best practices is essential for successful configuration and deployment of complex systems in your storage environment.
I touched upon the SAN Planner topic briefly in my previous post – and would like to delve a little deeper in this one. As mentioned earlier, SAN Planner is a wizard-based tool that assists storage administrators in end-to-end planning involving all storage components and related networks. SAN Planner helps implement best practices pertaining to replication relationships; it utilizes current and historical performance metrics to recommend the best configuration while commissioning new storage systems.
There are three planners associated with recommending storage configuration changes, based on current workloads, capacity utilization and best practices:
- Volume Planner – helps administrators provision storage based on capacity, compression, RAID levels, etc. It includes replication planning as well, supporting sessions such as Metro Mirror, Global Mirror and FlashCopy.
- Zone Planner – provides zoning and LUN masking configuration support.
- Path Planner – assists in planning and implementing storage provisioning for hosts and storage systems with multipath support in fabrics.
All three planners can be invoked separately or together in an integrated manner from the Tivoli Storage Productivity Center console. Learn more about these planners and their capabilities: download the latest Redbooks
As you can see, configuring a SAN with Tivoli Storage Productivity Center is child’s play, isn’t it? But can you check whether the current SAN configuration conforms to best practices? Yes you can!
SAN Configuration Analyzer provides an end-to-end check of configuration policies, ensuring the correctness of storage network configurations such as zoning, multipathing and replication. In addition, the tool alerts administrators when best practices are violated.
Storage networks undergo significant changes ever more often to accommodate changes in business policies and ever-growing data. Administrators are challenged to track configuration changes for problem determination, change management or auditing purposes. Tivoli Storage Productivity Center offers the SAN Configuration History Viewer to provide a historic view of changes and eliminate the time gap in determining problem areas associated with configuration changes.
To learn more about the IBM Tivoli Storage Productivity Center Select Series, contact your IBM sales representative or IBM Business Partner, or visit ibm.com
to join the virtual dialogue on Storage Hypervisor; share your thoughts and concerns in our group chat on October 7, 2011 from 12 noon to 1pm Eastern Time. You can log in now for a preview of topics.
In response to: Enabling Private IT for Storage Cloud -- Part II (management controls)
To see a transcript of the live chat held on Friday, September 30th
about this topic, visit this link:
And don't forget to listen to the 'open mic' conversation about
Storage Hypervisors with IBM's Ron Riffe, the author of this blog
series, and ESG analyst, Mark Peters:
This is part 3 of a 3 part post on how somebody responsible for a private storage environment could save their company a pile of money by implementing cloud storage techniques. Part I
introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. Part II
explained how a storage service catalog, self-service provisioning, and usage-based chargeback can be used to drive down cost. In this 3rd post, I’m going to share some thoughts that should help you be smarter about choosing a storage hypervisor.
The first step is to remind ourselves what we’re trying to accomplish with a storage hypervisor. From our experience deploying over 7000 storage hypervisors, the starting point for most folks is improved efficiency and data mobility. Remember, the basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, people are looking for the freedom to move a slice (or virtual volume) from tier to tier, from vendor to vendor, and more recently, from site to site all while the applications are online and accessing the data.
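A toy model makes the pool-and-slices idea concrete. This sketch claims no resemblance to SVC internals; it just shows capacity from mixed backends pooled together, with a virtual volume migrating between tiers while keeping its identity:

```python
# Toy model of the pool-and-slices idea: capacity from mixed backends goes
# into one pool, virtual volumes are slices of it, and a slice can move
# between backing tiers while keeping its identity. Purely illustrative;
# no resemblance to SVC internals is claimed.

class StoragePool:
    def __init__(self):
        self.backends = {}  # tier name -> free TB
        self.volumes = {}   # volume name -> (tier, size_tb)

    def add_backend(self, tier: str, tb: float):
        self.backends[tier] = self.backends.get(tier, 0.0) + tb

    def create_volume(self, name: str, tb: float, tier: str):
        assert self.backends[tier] >= tb, "tier out of capacity"
        self.backends[tier] -= tb
        self.volumes[name] = (tier, tb)

    def migrate(self, name: str, new_tier: str):
        """Move a volume between tiers; hosts keep addressing the same
        virtual volume, so the application never notices."""
        old_tier, tb = self.volumes[name]
        assert self.backends[new_tier] >= tb, "target tier out of capacity"
        self.backends[new_tier] -= tb
        self.backends[old_tier] += tb
        self.volumes[name] = (new_tier, tb)

pool = StoragePool()
pool.add_backend("vendor_A_tier1", 50)
pool.add_backend("vendor_B_sata", 200)
pool.create_volume("erp_data", 10, "vendor_A_tier1")
pool.migrate("erp_data", "vendor_B_sata")  # online move; the app keeps running
```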
To pull off this level of mobility – in servers or storage – it’s important that the hypervisor not be dependent on the underlying physical hardware for anything except capacity (compute capacity in the case of a server hypervisor like VMware, storage capacity in the case of a storage hypervisor). Think about it… Wouldn’t it be odd to have a pair of VMware ESX hosts in a cluster, one running on IBM hardware and one on HP hardware, and be told that you couldn’t vMotion a virtual machine between the two because some feature of your virtual machine would just stop working? If you tie a virtual machine to a specific piece of hardware in order to take advantage of the function in that hardware, it sort of defeats the whole point of mobility. The same thing applies to storage hypervisors. Virtual volumes that are dependent on a particular physical disk array for some function, say mirroring or snapshotting for example, aren’t really mobile from tier to tier or vendor to vendor any more.
But it’s more than just a philosophical issue; there’s real money at stake (you may want to read what comes next a couple of times). In Part II of this post I discussed using a storage service catalog as a means of defining specific service level needs for your different categories of data. These service levels covered the gamut from capacity efficiency and I/O performance (for you techies that’s RAID levels, thin provisioning, use of solid state disks, etc), to data access resilience and disaster protection (multi-pathing, snapshotting, mirroring…). The reason so many datacenters have an overabundance of tier-1 disk arrays on the floor is because, historically, if you wanted to take advantage of things like thin provisioning, application-integrated snapshots, robust mirroring for disaster recovery, high performance for database workloads, access to solid-state disk, etc., you had to buy tier-1 ‘array capacity’ to get access to these tier-1 ‘storage services’ (did you catch the subtle difference?). Now, I don’t have anything against tier-1 disk arrays (my company sells a really good one). In fact, they have a great reputation for availability (a lot of the bulk in these units is sophisticated, redundant electronics that keeps the thing available all the time). But with a good storage hypervisor, tier-1 ‘storage services’ are no longer tied to tier-1 ‘array capacity’ because the service levels are provided by the hypervisor. Capacity…is capacity…and you can choose any kind you want. Many clients we work with are discovering the huge cost savings that can be realized by continuing to deliver tier-1 service (from the hypervisor), only doing it on lower-tier disk arrays. As I noted in Part II of this post, we’ve seen clients shift their mix of ‘array capacity’ from 70% tier-1 to 70% lower-tier arrays while continuing to deliver tier-1 ‘storage services’ to their data. This YouTube video
describes an example of that at Sprint.
Smart idea #1: Be careful to understand what, if any, dependency a storage hypervisor has on the capability of an underlying disk array to deliver function to your virtual volumes (like thin provisioning, compression, snapshotting, mirroring, etc.)
Next thought. There are three rather interrelated solution categories in the area of dealing with outages and protecting data.
- Disaster avoidance (“I know the hurricane is coming, let’s move the datacenter further inland”)
- Disaster recovery (“oh oh, the hurricane hit, and my datacenter is dead”)
- Data protection (“oops, I goofed up my data and I need to recover”)
IT managers we talk to have been successfully dealing with disaster recovery (for the techies, that’s array mirroring along with recovery automation tools like VMware Site Recovery Manager
(SRM), IBM PowerHA
, or others) and data protection (that’s array snapshotting along with specific connectors for databases, email systems etc as well as connectors to enterprise backup managers like Tivoli Storage Manager) for years. This third area of disaster avoidance has really gained steam because storage hypervisors now allow you to access the same data at two locations giving you the ability to do an inter-site application migration with things like VMware vMotion
, PowerVM Live Partition Mobility
(LPM), or others. When you are expecting a disaster, disaster avoidance lets you transparently get out of the way. But it doesn’t magically keep all the other unexpected bad things from happening. You’ll still want to be prepared with disaster recovery and data protection. And if you are implementing a storage hypervisor, you shouldn’t be forced to choose.
Smart idea #2: Remembering smart idea #1, be sure that your storage hypervisor has its own ability to provide for disaster avoidance (inter-site mobility), disaster recovery (mirroring that’s integrated with recovery automation tools) and data protection (snapshotting that’s integrated with applications and backup tools).
One final thought. A storage hypervisor isn’t an island unto itself. Like a server hypervisor, it exists in a broader datacenter. Administrators need to be able to see it in the context of the disk arrays it manages, the servers (or virtual machines) that use its virtual volumes, the applications that need backups or clones, the disaster recovery automation that’s coordinating recovery of servers, storage, networks… You get the picture. When the challenges of day-to-day operations happen (and they do happen most every day)…
- …the storage network planner needs to look at the logical data path from a virtual machine to its physical server, through the SAN switch, to the storage hypervisor and finally to the physical disk array. He’ll need that storage hypervisor to be integrated with a SAN topology tool.
- …an application owner calls up with a performance issue that he’s blaming on ‘the storage’. You’ll need to be able to isolate performance across the whole data path (including the part of the path that goes through the storage hypervisor).
- …an application owner wants a consistent snapshot of his application to use as a backup copy (or a test clone). You’ll need a connector that talks to both the application and the storage hypervisor to identify the virtual volumes that need to be snapshotted, prepare the application for the snapshot, and then provide the application owner with an inventory of snapshots he can use to recover from.
- …you make the move toward cloud techniques in your private datacenter – implementing a storage service catalog, self-service provisioning, and usage-based chargeback. You’ll need a storage hypervisor that can be auto provisioned and that can provide the metrics on who is using how much storage.
Smart idea #3: Make a list of all the day-to-day operational things you do today with physical storage, and the things you hope to automate in the future, and be careful to understand if your storage hypervisor is sufficiently instrumented and integrated – or if it’s creating a new island to be separately managed.
And now a word from our sponsors :-) IBM offers the world’s most widely deployed storage hypervisor. With over 7000 deployments, hundreds in the newer inter-site disaster avoidance configuration, we’ve had a lot of opportunity to build some depth. As you evaluate using cloud storage techniques in your private enterprise, you’ll find the things I talked about in this blog series available in IBM products today. They can help you save your company a pile of money (and make you look like a genius while you’re doing it).
Storage hypervisor platform: IBM System Storage SAN Volume Controller (SVC)
Storage hypervisor management, storage service catalog, and self-service provisioning: Tivoli Storage Productivity Center Standard Edition (TPC SE)
Usage-based chargeback: Tivoli Usage and Accounting Manager
Thanks for staying with me through this blog series – hope you find it valuable!
The conversation continues!