IBM Systems Storage Software Blog
Amalore Jude
Gartner’s Magic Quadrant for SRM and SAN Management software is one of the leading industry publications that provides competitive benchmarking across storage management capabilities and helps support decision making for investments in storage management software. In its latest edition, Gartner positions IBM in the ‘Leader’ quadrant.
IBM Tivoli Storage Productivity Center (TPC) is a clear leader in the SRM market; many enterprises are using TPC today to manage their ever-growing, complex and highly critical storage environments.
TPC is designed to provide comprehensive device management capabilities that include automated system discovery, provisioning, configuration, performance monitoring and replication for storage systems and storage networks. TPC provides storage administrators a simple yet effective way to conduct storage management for multiple storage arrays and SAN fabric components from a single integrated management console.
TPC edges out all other vendors in terms of comprehensively achieving the vision for SRM. TPC provides storage management capabilities that allow administrators to efficiently simplify, centralize, optimize and automate storage management tasks. View the Gartner Magic Quadrant for Storage Resource Management and SAN Management Software, compliments of IBM, here.
If you haven’t unleashed the potential of TPC yet, watch for the upcoming version 5.1 release, slated to be announced on June 4, 2012 at IBM Edge2012.
To learn more, please register for IBM's premier storage conference, IBM Edge2012, being held June 4-8 in Orlando, Florida. This is a 4.5-day conference, 100% focused on IBM storage solutions, with many TPC 5.1 and IBM SmartCloud Virtual Storage Center sessions and customer speakers. Tivoli speakers will be featured throughout the conference, and more than 30 sessions will focus exclusively on Tivoli’s entire suite of products, taught by IBM Distinguished Engineers, leading product experts, clients and partners. A special registration discount applies to all Pulse 2012 attendees! Register here.
Note: This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available here http://www.gartner.com/technology/reprints.do?id=1-1A16V0B&ct=120405&st=sb
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Maria Huntalas
IBM Edge2012 is a 4.5-day premier storage event that brings together innovative IBM technologies, world-class training, leading industry experts, and compelling client success stories and best practices. With over 250 technical sessions, product demonstrations, hands-on labs, and exhibitions geared to roles spanning from business leaders to IT practitioners, Edge2012 is dedicated to helping you design, build and implement efficient storage infrastructure solutions. Tivoli Storage will be featured in over 35 sessions and several product demos at this event. You won't want to miss this!
Edge2012 will feature three separate events:
IBM System Storage Technical University at IBM Edge2012 (known as Technical Edge) will provide 4.5 days of world-class training with more than 250 technical sessions, hands-on labs and on-site certification, taught by IBM Distinguished Engineers, leading product experts, customers and partners. Technical Edge is sponsored by IBM Partner, Intel. Perhaps the best part about Technical Edge is the involvement of customers in putting together the individual sessions. Sharing expert best practices is so important, as organizations are struggling with monumental data growth which is outpacing their IT budgets. IBM Technical Edge is specifically designed to help IBM customers and partners keep pace with techniques to improve their storage efficiency.
With shrinking budgets, education and technical expertise become more important than ever to keep your data center running at maximum performance. An upcoming IBM Global Data Center Study reveals that fewer than 1 in 5 data centers are highly efficient. Data centers that run efficiently can allocate 50% more of their IT budgets to new projects.
With input from IBM customers and Business Partners, IBM has developed an agenda that will cover some of the most compelling topics in IT storage today. The full agenda of technical classes is attached to this blog post as a PDF file. Click here to download descriptions of all the sessions.
Attendees will be able to choose from a wide variety of sessions, which have been arranged into the following tracks:
IBM Technical Edge is part of IBM's Edge Conference taking place June 4-8 in Orlando, Florida. Storage Community members can save on registration by registering early, before May 6th. To learn more about Technical Edge, go to www.ibm.com/edge
Maria Huntalas
Today’s general session kicked off a bit later than usual this week after an evening of rockin’ out with Maroon 5! The MGM Garden Arena was wall-to-wall IBMers (with a smattering of party crashers) masquerading as concert-goers as cameras flashed, video cameras whirred, and everyone competed (in classic IBM fashion) in the "best Maroon 5" photo contest. You can check out my Pulse 2012 Storage Management photos here: http://bit.ly/yHU78O
Now, onto today’s General Session, which has been anticipated all week, due to Steve Wozniak’s appearance onstage with Grady Booch. More on that later........
First, kicking off the final General Session of Pulse 2012, Erich Clementi, Senior Vice President of GTS, talked about the pressures of a Smarter Planet: where everything is instrumented, interconnected, and intelligent. He discussed IBM's SmartCloud platform and provided examples of how IBM is helping clients get beyond virtualization by offering deployment choices across private and hybrid clouds, as managed services, or delivered as software as a service. He also stressed that to re-think IT and reinvent your business, you need a trusted partner.
Next up: Helene Armitage, General Manager of STG, discussed how the consumer data explosion will have a tremendous impact on systems innovation and how this is driving the infrastructure of the future. An impactful data point she cited: 80% of people will have mobile devices in the next 5 years, which has significant implications for how we build data centers. I especially liked the challenge she posed to us: assume a leadership role in figuring out where the greatest value lies to maximize business outcomes.
Now, on to Watson.........This presentation, by Manoj Saxena, General Manager of IBM Watson Solutions, was especially moving as he discussed the real-life impact that Watson is having in the healthcare industry: acting almost like a physician’s assistant, and helping in disease diagnostics. Across so many industries, Watson has been tapped to address huge challenges that leverage Watson’s analytical technology. Interesting to note that this technology definitely plays in a Cloud-based IT environment.
As the finale to the three days of Pulse general sessions, IBM Fellow Grady Booch interviewed Steve Wozniak, co-founder of Apple Computer. Key topics focused on Wozniak’s fascinating life as an inventor, teacher, and entrepreneur. He shared such great stories, like the time he and Steve Jobs used their technology "know-how" to crank call the Pope. Seriously, though, Wozniak was so passionate about the importance of educating kids on computers and programming and the meaning of 1s and 0s that, after his stint with Apple, he went on to teach 5th graders for a while. Scott Hebner joined Grady and Woz on stage to take questions from Twitter, using the hashtag #askwoz. And there were some great ones, like: "What’s the next killer app, Woz?" How ‘bout Watson for the iPhone?! And when asked what advice he’d offer IBM, Woz said: "Stay a marketing-driven company. You know your customers’ needs, and that is key. I admire that!" Thanks, Woz! Will do!
Drumroll, please.........Now on to all the Storage Management happenings at Pulse, Day 3. Two simultaneous storage sessions kicked off this morning: LV 1871 and their virtualization journey, and Hertz Australia’s TSM 6 and TSM SUR experiences. LV 1871, a German insurance company, discussed how IBM SAN Volume Controller and Tivoli Storage Productivity Center have helped it increase business agility, enable a standardized management console in the data center, and elevate IT service levels. Meanwhile, in the next room, Hertz Australia’s Richard Whybrow (with Hertz mascot Horatio) spoke about Hertz’s TSM 6 experience and how they had also considered CommVault and NetBackup, but IBM was the most cost-effective choice by far. We like that! As a sidenote, Richard was also the IBM Tivoli User Group video winner with his video of how he uses TSM at Hertz. Later in the day, Richard also participated in a customer video interview for us, in which he re-stated on camera that CommVault and NetBackup were far more expensive than TSM.
Later in the day, the storage sessions continued with Peer 1 Hosting discussing how they leveraged the data reduction capabilities of TSM to effectively manage thousands of customers' backup and recovery environments. Also, Principal Financial Group reviewed best practices and capabilities co-developed with IBM that enable TSM VE to execute parallel backup and restore operations on multiple virtual machines simultaneously.
The final storage session of Pulse featured Tivoli Business Partner Frontsafe discussing the TSM portal cloud management solution, which maximizes the manageability and effectiveness of your TSM environment, basically allowing you to deliver TSM as a cloud service. Key benefits highlighted include: a faster way to bring TSM to market with fewer resources needed; elimination of the complexity of client-side TSM administration; easy-to-use daily reporting and support tools; and the ability to set up multiple layers of distribution (OEM branded all the way down). This solution also makes TSM available to small- and medium-sized companies.
As I wrapped up Pulse 2012 with a few last minute photo opportunities for Tivoli Storage, and ended the evening with a spectacular meal at Todd English’s Olives Restaurant in the Bellagio with colleagues, I couldn’t help but think "best Pulse ever," but, I only have 2 under my belt, now, so what do I know? But, really, how DO we top Maroon 5 AND Steve Wozniak together at a single Pulse?? I can already hear the creative drumbeat of Pulse 2013 in the distance now............
Martha Westphal
iLuminate kicked off the General Session with an innovative, cool-to-watch performance – and as Scott said, their performance plus coffee makes for a wide-awake audience!
Steve Mills took the stage – his expertise and client focus really shine through, but the best part was hearing about how IBM “eats its own cooking”. He talked about the IT transformation that IBM has undergone, under Jeannette Horan’s direction, to increase productivity and efficiency while reducing costs. Did you know that IBM has to manage over 100 petabytes of production data? How do you think they do that? You’re right: Tivoli Storage Solutions!! Steve has such a way with words. I especially loved the sound bite, “Linux runs like a scalded dog on the IBM mainframe.”
Next up was Bob Picciano – he brought up an impressive panel of customers from Equifax, Rogers, GE and Erie County. Each had a unique story to tell about working with IBM to optimize their business.
The Storage sessions today continued to support the main themes of unified data protection, storage virtualization, and cloud. There were proof points from Bank of China, with their consolidated backup and recovery environment, and Unum, who delivered a presentation on storage virtualization using SAN Volume Controller and Tivoli Storage Productivity Center. And speaking of TPC, I hope you made it down to the Solutions Expo to see the new GUI for TPC (@ Ped 44) that is being tested right now with customers. Butterfly Software also had a session on data center consolidation – if you haven’t heard what Butterfly can do for you, you owe it to yourself to learn! In fact, all our partners in the Solutions Expo have said what a great show this has been for them so far.
Even though these are long days, it’s so good to see everyone, hear from everyone, and learn so much! And tonight, WE DANCE! Maroon 5 takes the stage this evening, and I think everyone is ready for it! Remember we get to sleep in tomorrow, and hear from Steve Wozniak, co-founder of Apple Computer, as Day 3 focuses on Innovation.
Richard Vining
The Storage Management track at IBM PULSE 2012 kicked off in a big way this afternoon, with a presentation by Laura DuBois, vice president of the storage practice at analyst firm IDC. Laura reviewed where data and storage management technologies have been (focused on the core data center), where they are (spreading to the edge) and where they are going in the future (cloud services).
Ron Riffe, Tivoli Storage Product Manager, then shared IBM's vision for smarter data and storage management in two strategic areas - reducing the percentage of the storage IT budget that is dedicated to managing the copies of data, and controlling the overall cost of storage in the face of continuing data growth and service level expectations.
The first goal is addressed through Unified Recovery -- a path that the Tivoli Storage Manager (TSM) family has been on for the past two years. It continues with new capabilities that integrate the complex array of data protection technologies into a single user interface, reducing costs and avoiding the serious risks of trying to manage many different point solutions for protecting data across different types of systems (including virtualized systems), applications and locations.
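The "single interface over many point solutions" idea can be sketched as a thin dispatch layer. This is an illustrative Python sketch of the pattern only; every class and method name here is invented for the example and is not a TSM API.

```python
# Hypothetical point solutions, each with its own incompatible interface.
class FileBackupTool:
    def run_file_backup(self, path):
        return f"file backup of {path}"

class VMSnapshotTool:
    def snapshot_vm(self, vm_name):
        return f"snapshot of VM {vm_name}"

class UnifiedRecoveryConsole:
    """One entry point that dispatches to the right point solution."""
    def __init__(self):
        # Adapters hide each tool's native interface behind one verb.
        self._adapters = {
            "file": lambda target: FileBackupTool().run_file_backup(target),
            "vm": lambda target: VMSnapshotTool().snapshot_vm(target),
        }

    def protect(self, kind, target):
        if kind not in self._adapters:
            raise ValueError(f"no adapter registered for {kind!r}")
        return self._adapters[kind](target)

console = UnifiedRecoveryConsole()
print(console.protect("file", "/var/data"))  # one console, many backends
print(console.protect("vm", "web-01"))
```

The point of the pattern is that adding a new data source means registering one adapter, not teaching administrators another tool.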
With the planned enhancements to Unified Recovery, our customers will be able to:
Controlling overall storage costs is accomplished by a new solution suite -- the IBM SmartCloud Virtual Storage Center -- a storage hypervisor that virtualizes and manages heterogeneous storage systems. With this set of integrated capabilities, organizations can:
Following the super crowded (standing room only) kickoff session, we started into a fantastic set of breakout sessions over the remainder of Pulse week. Today's sessions included:
Tomorrow promises to be even more exciting, with a "main tent" demonstration of the SmartCloud Virtual Storage Center in the MGM Grand's Grand Garden Arena, many more informative breakout sessions (including my panel session on Modernizing Data Protection) and ending with a party and concert by Maroon 5.
Additional Related Links:
Livestream videos from Pulse (Pulse folder)
Tivoli User Group (TUG)
Follow Pulse on Twitter with the #ibmpulse hashtag, or our Twitter accounts: @servicemgmt, @ibmtivoli, @ibmpulse, @assetmgmt, @ibmstorage, @ibmsecurity and @ibmcloud
Follow our blogs: Service Management, Asset Management, Pulse, Storage
Maria Huntalas
Pre-Pulse Tivoli Storage Management activities kicked off Saturday, March 3 with the Tivoli Storage Business Partner Summit. We had a strong showing of storage Business Partners, and the summit was a great way to gear up for Pulse, which kicked off last night with the Grand Opening & Welcome Reception in the Pulse Solution Expo Hall. Thanks to all our Tivoli Storage Business Partners for attending the Pre-Pulse Tivoli Storage BP Summit and sharing valuable insights!
During this BP Summit, we heard from both IBM and Business Partners who covered key topics such as:
- TSM 6 migration with Butterfly
- TSM competitive positioning
- SmartCloud Virtual Storage Center
- STG cloud initiatives & other STG opportunities
- Key trends in the storage marketplace, and
- TSM Suite for Unified Recovery.
You can learn more about these key storage topics in the Storage Management track, which kicks off today, Monday, at 2PM in Room 117. Speakers include Steve Wojtowecz, VP of Storage Software Development, and Bina Hallman, Director, Tivoli Storage & System z Software Product Management. Joining these IBM speakers is IDC analyst Laura DuBois, who will address key storage trends. Following this kickoff session, there will be several Storage Management sessions at Pulse in rooms 115 and 117, as well as key demos in the Solution Expo. More details can be found on the Pulse site under the Cloud & Data Center Optimization stream, Storage Management track. And don’t forget to leverage the SmartSite Agenda Builder so you don’t miss out on any key Pulse storage sessions!
Continuing the recap of the Saturday BP Storage Summit: we listened to Butterfly Software present their successes with TSM migrations, leveraging their assessment tool. You won’t want to miss their live data migration demo during the Monday storage Birds of a Feather session #1387 at 6PM in room 117. Special refreshments to be served!
We also heard from Frontsafe, winner of a Business Partner Award today at the Pulse BP Summit Day. You can also hear more about Frontsafe’s “Backed by TSM” solution at storage session 1360 on Wednesday, 3:30PM in room 115. Also, check out Frontsafe’s Livestream interview on Monday, 2:30PM at the Expo Stage in the Solution Center. You can watch this interview on the Pulse Livestream channel, as well. Backed up by IBM Tivoli Storage Manager (TSM) is a unique Ready for IBM Tivoli program classification for clients, business partners and managed service providers who use the IBM Tivoli Storage Manager family of offerings as a core component of a data protection and recovery managed service or cloud-based offering. It is IBM’s new partner program for validated TSM cloud solutions. Visit the Ready for Tivoli / Backed up by IBM TSM table in the BP Café (part of the Solution Expo Center).
Another storage BP, Starfire Technologies, recently joined Frontsafe’s Business Partner program. You can listen to Richard Spurlock, CEO of Starfire, speak more about this partnership during his Pulse interview on the Livestream stage in the Solution Expo. Check it out here: Starfire Technologies Expo Stage Interview.
As Tivoli storage just finished a stellar year of significant growth in the marketplace, 2012 promises to be another strong year with continued focus on key growth areas such as Data Protection & Central Management and the storage hypervisor. Please join us at Pulse in rooms 115 and 117 in the Conference Center all week to learn more! And keep your eye on the Tivoli Storage Blog here for all Storage at Pulse happenings........
Richard Vining
What do you think of when you see the name Riverbed? For those of you not familiar, Riverbed is an IBM partner and the leader in Wide Area Network Optimization. These days, Riverbed offers more than just WAN OP solutions. Riverbed products improve IT infrastructure, speed up application performance, reduce bandwidth utilization, and offer solutions to securely leverage cloud storage. For enterprises looking to implement strategic initiatives such as virtualization, consolidation, cloud computing, and disaster recovery, Riverbed delivers optimum performance for globally connected enterprises without compromising the end user experience.
Steelhead® appliances from Riverbed, Virtual Steelhead™, and Steelhead Mobile can increase network throughput and application performance by up to 100 times. Riverbed Cascade® provides enterprise-wide network and application visibility and analysis for both enterprise customers and service providers. Riverbed Whitewater® cloud storage gateways revolutionize data protection by leveraging cloud storage. And Stingray Traffic Manager® provides unprecedented scale and flexibility to deliver applications across the widest range of environments. All in all, Riverbed offers end-to-end solutions to analyze, accelerate and optimize an organization’s IT infrastructure without compromising performance for the end user, no matter how far away they reside from the data center.
Stop by the Riverbed booth E105 at IBM PULSE 2012 to see the latest in IT performance solutions.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Richard Vining
Riverbed and IBM enjoy a strong partnership which, thanks in part to Riverbed’s Whitewater cloud storage gateways, extends to IBM’s storage management software ecosystem. Whitewater leverages public cloud storage to reduce backup and administration costs, improve disaster recovery readiness and provide secure off-site storage for critical backup data, providing LAN-like access to public cloud storage in a drop-in appliance.
What does this mean for the Riverbed/IBM partnership? A seamless integration with existing IBM Tivoli Storage Manager backup infrastructure and cloud-storage providers, paving the way to extracting more value from existing storage, application and network investments. Tivoli Storage Manager administrators can leverage Whitewater’s local caching and public cloud storage abilities to propel them into the next generation of storage and disaster recovery, leaving classic disk- and tape-based devices (and their operational and maintenance costs) behind. Together, Riverbed and IBM offer a best-of-breed solution which slashes costs and enables almost unlimited scalability, taking full advantage of the flexibility and cost savings offered by storage-cloud services.
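The local-cache-plus-cloud-tier pattern described above can be sketched in a few lines of Python. This toy model illustrates write-through caching in general, with a simulated object store standing in for the cloud; it is an assumption for illustration, not Whitewater's actual design.

```python
class CloudBucket:
    """Stands in for a public cloud object store."""
    def __init__(self):
        self.objects = {}

    def put(self, key, data):
        self.objects[key] = data

    def get(self, key):
        return self.objects[key]

class CachingGateway:
    """Write-through gateway: LAN-speed local cache in front of the cloud."""
    def __init__(self, cloud, cache_size=2):
        self.cloud = cloud
        self.cache = {}          # most recently stored objects kept locally
        self.cache_size = cache_size

    def write(self, key, data):
        self.cloud.put(key, data)    # write-through to the cloud tier
        self._cache_insert(key, data)

    def read(self, key):
        if key in self.cache:        # fast hit from the local cache
            return self.cache[key]
        data = self.cloud.get(key)   # slower fetch from the cloud tier
        self._cache_insert(key, data)
        return data

    def _cache_insert(self, key, data):
        self.cache[key] = data
        while len(self.cache) > self.cache_size:
            # evict the oldest entry (dicts preserve insertion order)
            self.cache.pop(next(iter(self.cache)))

cloud = CloudBucket()
gateway = CachingGateway(cloud, cache_size=2)
gateway.write("backup-a", b"...")   # lands locally and in the cloud
```

The appeal of the design is that backups land on the LAN at local speed while every object is also safely off-site in the cloud tier.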
Riverbed will be demonstrating how fast it can move TSM data to public cloud storage at IBM Pulse 2012 in Las Vegas, March 4-6. At the show, come by booth E-105 to ask for a Whitewater demonstration and learn more about how Riverbed can optimize and extend your TSM environment as well as accelerate your WAN with the Riverbed Steelhead product family.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Richard Vining
Are you going to IBM Pulse 2012, the premier Cloud and IT Optimization event of the year? It’s at the MGM Grand in Las Vegas from March 3 – 7, and we have an awesome agenda with some first class speakers and entertainment.
But this blog is about our storage management software ecosystem partners that will also be attending and lending their support. If you will be at Pulse, please plan to visit with these companies while enjoying the refreshments offered in the Expo Center:
And of course, I’ll be there as well. You can catch me around the storage pedestals in the Expo Center, and at breakout session 2136 (5:00 Tuesday, room 117). I hope to meet many of you there.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
IBM TSM for Virtual Environments and IBM Tivoli Storage Productivity Center Standard Edition debut in the new VMware Solution Exchange
On January 25, 2012, VMware officially unveiled its VMware Solution Exchange (VSX), an online virtualization and cloud marketplace designed to help customers, partners and developers locate and purchase VMware-certified products. The VSX showcases solutions from VMware's technology alliance partners, such as IBM. Included in VSX on day one were IBM Tivoli Storage Manager for Virtual Environments, v6.3 and IBM Tivoli Storage Productivity Center Standard Edition, v4.2.2.
IBM TSM for Virtual Environments is an excellent data protection and recovery solution for VMware environments because it:
IBM Tivoli Storage Productivity Center Standard Edition (TPC SE) delivers advanced management for virtual server and storage environments. It provides significant benefits such as:
If you want to learn more about these products while you are at Pulse 2012 be sure to visit the Solution Expo and visit our demonstration pedestals:
Finally, if you are already a fan of TSM for VE and/or TPC SE, then log into the VMware Solution Exchange and write a product review. Also, don’t forget to give the products a 5-star rating!
Richard Vining
Are you going to the IBM PULSE conference (ibm.com/pulse)? I am, and I am hosting a panel discussion on the need to modernize backup and restore capabilities.
Scheduled to join me on the panel are:
- Randy Olinger, Director of Enterprise Storage Systems, UnitedHealth Group
- Gerardo Colon, Storage Administrator, Adventist Health System
- Peter M. Nielsen, CEO and Founder, Front-Safe S/A
The premise of the panel discussion will be that backup and restore just aren't as easy as they used to be, given the increasing complexity and distribution of IT, the growth of data to unsustainable levels, the pressure to improve service levels by reducing and eliminating downtime, and the need to cut spending. Our panel of experts will share how their organizations are dealing with these and other challenges, and I'm guessing that we'll cover technology solutions such as data deduplication and compression, snapshots and CDP, replication, simplified and unified administration, archiving and data lifecycle management, and how to do all these things while driving down costs.
But that's part of the fun of a panel discussion -- you never really know what you're going to get. It's scheduled for Tuesday afternoon, March 6th at 5:00PM Las Vegas time, in room 117. The session number is 2136. I hope you can make it.
Oh - and have you heard - Maroon 5 and iLuminate will be entertaining us during the event; you have to go!
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Achieving Ready for IBM Tivoli software validation shows customers that your solution meets or exceeds IBM compatibility criteria and successfully integrates with the IBM Tivoli Storage Manager family of offerings. Backed up by IBM TSM validation further demonstrates your offering as being an integral part of a TSM cloud or managed service solution.
Sound interesting? Want to learn more? Then be sure to stop by one of the following venues while you are at Pulse 2012 for more details about this new program and how you can participate:
· Business Partner Summit – Sunday, March 4 - Information on the program will be included in all the breakout sessions
· Business Partner Café - Visit the Ready for Tivoli / Backed up by IBM TSM table
· Solution Expo – Visit our demonstration pedestal, Optimizing Cloud Based Data Protection Services with Tivoli Storage Manager
Can’t wait until Pulse and want to learn more now? Then contact John Connor, on the IBM Tivoli Storage Manager product management team at email@example.com.
Steve Wojtowecz
Every year I try to publish a set of storage trends that I believe most IT shops are trying to address, and where technologies exist to help resolve them. Here are my thoughts for 2012...
1) Storage breakthroughs nipping the “Digital Dark Age” in the bud
Since the early 1990s, an increasing proportion of data created and used has been in the form of digital data. Today, the world produces more than 1.8 zettabytes of digital information a year. Yet digital storage can in many ways be more perishable than paper. Disks corrode, bits “rot” and hardware becomes obsolete. This presents a real concern of a “Digital Dark Age,” where digital storage techniques and formats created today may not be viable in the future as the technology originally used becomes antiquated. We’ve seen this happen. Take the floppy disk, for example: a storage tool so ubiquitous that people still click on its enduring icon to “save” their word, presentation and spreadsheet documents, yet most Millennials have never seen one in person. But new research shows storage mediums can be vastly denser than they are today. New form factors such as solid-state disks will help provide more stable, longer-term preservation of data, and the promise of "the cloud" allows access to data anywhere, anytime. Recently, IBM researchers combined the benefits of magnetic hard drives and solid-state memory to overcome the challenges of growing memory demand and shrinking devices. Called Racetrack memory, this breakthrough could lead to a new type of data-centric computing that allows massive amounts of stored information to be accessed in less than a billionth of a second. This storage research challenges previous theoretical limits to data storage, ensuring our digital universe will always be preserved.
2) Data curation will provide structure in midst of the data deluge
Now that we have the capability to preserve our digital universe, we need to find a way to make it useful. We need to take the next step past data preservation to data curation. Data curation is the active and ongoing management of data through its lifecycle. This smarter data categorization adds value to data, helping glean new opportunities, improve the sharing of information and preserve data for later re-use. Social media is a great example of the power of curated data. Sites like Facebook, Google+ and Pinterest compile our digital lives and give their users a platform to organize their content. However, there's also a lot of work involved in selecting, appraising and organizing data sets to make them accessible and interpretable. The key is bringing data sets together, organizing them and linking them to related documents and tools. If data can be stored in a way that provides context, organizations can find new and useful ways to use that data.
3) Storage analytics will open new business insights
With data curation giving organizations a platform to better utilize their data, analytics will help turn that data into intelligence and, ultimately, knowledge. With the information that historical trending and infrastructure analytics provide, you can index and search more intelligently than ever before. By running analytics on stored data, including backup and archive copies, you can draw business insight from that data no matter where it exists. The application of IBM Watson technology to healthcare provides a good example. Watson collects data from many sources and is able to analyze its meaning and context. By processing vast amounts of information and applying analytics, it can suggest options targeted to a patient's circumstances, assisting decision makers such as physicians and nurses in identifying the most likely diagnosis and treatment options for their patients. Through intelligent storage and data retrieval systems, we can learn more from the information we have today to improve service to customers or open new revenue streams by leveraging data in new ways.
4) Storage becomes a celebrity – new business needs are pushing storage into the spotlight
As our digital, data-driven universe expands, certain industries are reaching new levels of innovation through the capacity to house, organize and instantaneously access information. Hollywood, for example, is known for its big-budget blockbusters, but it's the big storage demands of new formats such as digital, CGI, 3D and high definition that are impacting not just the bottom line but studios' ability to produce these types of movies. Data sets for movies have grown to the petabyte level. Filmmakers are beginning to trade in film reels for SSDs, as just one day's worth of filming can generate hundreds of terabytes of data. The popularity of these high data-generating formats means studios are looking for new storage technologies that can handle the demand. The healthcare industry may be facing an even bigger data dilemma than the entertainment business. Take the University of Leipzig in Germany, which runs a major genetic study called LIFE to examine disease in populations. LIFE is cataloging the genetic profiles of several thousand patients to pinpoint gene mutations and specific proteins, and this process alone generates multiple terabytes of data. Even one 300-bed hospital may generate 30 terabytes of data per year. Those figures will only grow with higher-resolution medical imaging and new tools and services such as electronic healthcare records available online.
5) Intervention...The Data Hoarder
In this era of Big Data, more is always better, right? Not so – especially when every byte of data costs money to store and protect. Businesses are turning into data hoarders, spending too much time and money collecting useless or bad data and potentially making misguided business decisions as a result. This practice can be changed with simple policy decisions and by implementing capabilities that already exist in smarter storage, but companies are hesitant to delete any data (much of it duplicated) for fear of needing it down the line for business analytics or compliance purposes. Part of the solution starts with eliminating the copies: according to IDC, nearly 75% of the data that exists today is a copy. By deleting redundant information and disabling its spread, organizations invest in data quality and availability for the content that matters to the business. Consider the effect of unneeded data, costing money as it replicates throughout an organization's information systems. This outdated data can also potentially be accessed for fraud.
Raising the quality of data is not costly—not getting it right is.
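The copy-elimination point above can be made concrete. Below is a minimal, hypothetical sketch (not an IBM tool) that walks a directory tree and groups files by content hash; any hash that maps to more than one path is a redundant copy that a retention policy could reclaim:

```python
import hashlib
import os

def find_duplicates(root):
    """Group files under `root` by content hash; any group with more
    than one path represents redundant copies of the same data."""
    by_hash = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            # Hash in 1 MB chunks so large files don't exhaust memory.
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            by_hash.setdefault(h.hexdigest(), []).append(path)
    return {k: v for k, v in by_hash.items() if len(v) > 1}
```

Real deduplicating storage works at the block or chunk level rather than on whole files, but the principle is the same: identify identical content and keep only one instance.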
ARE YOU SPEAKING AT PULSE? IF SO, READ ON PLEASE...and book your room at the MGM Grand today to avoid a price increase!
1. Have you uploaded your presentation? The deadline to upload presentations was January 20th to enable appropriate reviews and posting to the Pulse 2012 SmartSite Agenda Builder. Your presentation will be converted to PDF and can be downloaded or printed in advance by attendees, pending your approval. For a full list of presentation guidelines and processes please review the Presentation tab on the online Speaker Kit.
2. Do you know what audio visual equipment will be available in your session room? Click the A/V tab in your online Speaker Kit to review this important information.
3. Are you connected? Follow the conference news & highlights on Twitter or the Pulse blog. Click the Speaker Kit tab to find links and hashtags for use with social media. Find Pulse attendees using the Pulse SmartSite agenda builder.
4. Attendees are always interested in getting to know their speaker! Do you have a bio? Review and update your brief bio by logging onto the Speaker Kit website.
5. Have you started to build your Pulse conference agenda on SmartSite, the attendee conference portal? You will need your conference registration confirmation number to login to this site. Click the Build My Agenda icon to view scheduled sessions.
6. Have you registered for the conference and booked your hotel? Review the registration instructions listed in the registration tab on the speaker kit website.
Very important: conference hotel accommodations are limited and available on a first-come, first-served basis. Conference rates are valid until January 27, 2012 or until the room block is sold out, whichever comes first.
Please take a few minutes to review the information in your online Speaker Kit, and follow-up on all speaker actions as needed.
If you have any questions or need additional information, please contact the speaker support at PulseSpeaker@experient-inc.com. We look forward to seeing you at the MGM Grand in Las Vegas March 4-7!
Posted by Steve Wojtowecz
Healthcare, financial services, media and entertainment, and scientific research among many industries face the challenge of storing and managing the proliferation of data to extract critical business value. As storage needs rise dramatically, storage budgets lag, requiring new innovation and approaches around storing, managing, and protecting Big Data, cloud data, virtualized data and more.
Watson-inspired Storage Takes on the Cosmos: IBM is working on a project with the Institute for Computational Cosmology (ICC) at Durham University in the U.K. and Business Partner OCF to build a storage system to better store and manipulate Big Data for its cosmology research on galaxies. ICC is adopting the same IBM General Parallel File System technology used in the IBM Watson system to store and manage more than one petabyte of data from two significant projects on galaxy formation and the fate of gas outside of galaxies. The enhanced storage system will enable up to 50 researchers, working collaboratively to access and review data simultaneously. It will also help ICC learn to manage data better, storing only essential data and storing it in the right place.
New Storage Platform Delivers More Personalized, Visual Healthcare: A medical archiving solution from IBM Business Partners Avnet Technology Solutions and TeraMedica, Inc. powered by IBM systems, storage and software gives patients and caregivers instant access to critical medical data at the point-of-care. Developed in collaboration with IBM, the medical information management offering can manage up to 10 million medical images, helping health care practitioners provide better patient care with greater efficiency and at reduced costs. The integrated platform allows users to manage and view clinical images originating from different treatments and providers to bring secure, consistent image management and distribution at point-of-care.
Virtualization Consolidates Storage Footprint for Medical Center: Kaweah Delta Health Care District (KDHCD), a general medical and surgical hospital in Visalia, Calif., needed to reduce its operational costs while increasing storage space. To meet these demands, KDHCD tapped IBM's storage systems to create a new storage platform that reallocates resources and saves a significant amount of data space with thin-provisioning technology. Virtualization creates a smaller hardware footprint so the hospital also saved on power and cooling costs. KDHCD now has a consolidated storage environment that provides the scalability, ease-of-management, and security to support critical healthcare data management for the hospital.
Posted by Amalore Jude
Data center managers often find it difficult to accommodate data growth while maintaining high levels of storage service and availability. In addition to these challenges, new IT initiatives such as virtualization and cloud services introduce additional complexity for already stressed-out administrative staff.
IBM's Integrated Service Management solutions can help organizations realize the full potential of their business by providing a holistic approach to delivering and managing IT services. Specifically, IBM Tivoli Storage Productivity Center is designed to equip today's IT organizations with critical capabilities for visibility, control and automation in the storage environment.
Download and read the latest white paper, "Gain visibility, control and automation in your storage environment."
Survey of IT Decision Makers Sheds Light on Need for a New Class of Storage
Late last year, IBM issued survey results that shed light on the storage spending priorities and organizational needs for the near future. Conducted by Zogby International on behalf of IBM, the survey of 255 IT professionals in decision-making positions showed that the majority of respondents (57 percent) agree their organization needs to develop a new storage approach to manage future growth.
The survey underscores the need for a new class of storage that can expand the market for solid-state drives (SSDs) by combining their ability to speed the delivery of data with lower costs and other benefits. Nearly half (43 percent) of IT decision makers say they have plans to use SSD technology in the future or are already using it. Speeding delivery of data was the motivation behind 75 percent of respondents who plan to use or already use SSD technology. However, the major factor for not using SSD was cost, according to 71 percent of respondents.
To address this issue, IBM Research has been investigating a potential solid-state breakthrough called "Racetrack memory" that could someday access data significantly faster than hard-disk drives, at the same low cost, and be a successor to flash in handheld devices.
· Nearly half (43 percent) say they are concerned about managing Big Data.
· Nearly half (48 percent) say they plan to increase storage investments in virtualization, with others citing cloud (26 percent), flash memory/solid state (24 percent) and analytics (22 percent).
· More than a third (38 percent) said their organization’s storage needs are growing primarily to drive business value from data. Adhering to government compliance and regulations that require organizations to store more data for longer -- sometimes up to a decade -- was also a leading factor (29 percent).
· About a third of all respondents (32 percent) say they either plan to switch to more cloud storage in the future or currently use cloud storage.
Organizations are faced with an increasing challenge of storing, analyzing, and protecting ever-expanding data sets that hold significant business value, driving the need for radical new approaches to storage fueled by innovation. Cloud computing, analytics and more advanced storage management technologies will be critical to tapping into that data and turning it into intelligence.
Focused on developing disruptive innovation and pushing the boundaries of data exploration and utilization, IBM Research drives new approaches to managing data, including storage for cloud systems that are geographically dispersed, adding autonomic behavior to storage systems, creating archival systems that prevent a “digital dark age,” and optimizing storage for analytics.
Posted by Maria Huntalas
In response to: Enabling TSM Unified Recovery Management Replication. Want to learn more about how HyperIP can help accelerate your data transfer by as much as 12x? Join NetEx and IBM Tivoli Storage Software for a webinar on Jan. 25 at 1 PM EST to hear how pairing Tivoli Storage Manager 6.3 with NetEx HyperIP can help you achieve this! Register here: http://bit.ly/xQFHdm
Posted by Maria Huntalas
In the IBM Thought Leadership Whitepaper,
Posted by Richard Vining
In October 2011, IBM added native replication of backup data in Tivoli Storage Manager Extended Edition v6.3 to help customers add "warm standby" disaster recovery capabilities to their unified recovery management platform. This is a powerful new feature that can help reduce the costs of maintaining a separate DR point solution, and simplifies the overall management of the environment.
However, when moving data between physical locations, especially over the long distances desired for a true disaster recovery solution, network latency can become a significant issue. TSM replication is extremely efficient, in that it sends only incremental, deduplicated data between sites. But transfer times can still be impacted by network latency over long distances.
To overcome this problem and provide near-native transmission speeds, WAN acceleration solutions such as NetEx HyperIP can be deployed.
NetEx recently completed testing of its solution with the new TSM replication feature and found that it can accelerate data transfer by as much as 6 times, or 12 times with HyperIP's block-level compression. To learn more, please visit http://www.netex.com/blog/?p=206
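The latency effect described above follows from a classic rule of thumb: a single TCP stream can keep at most one window of data in flight per round trip, so its throughput is capped by window size divided by round-trip time, regardless of link bandwidth. A small illustrative calculation (the numbers are examples, not TSM or HyperIP measurements):

```python
def tcp_throughput_ceiling(window_bytes, rtt_ms):
    """Upper bound on a single TCP stream's throughput, in Mbit/s:
    at most one window of data delivered per round trip."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

# A 64 KB window over an 80 ms cross-country round trip caps a single
# stream at roughly 6.5 Mbit/s, far below typical WAN link speeds, and
# doubling the distance halves the ceiling again.
print(tcp_throughput_ceiling(64 * 1024, 80))  # → 6.5536
```

WAN accelerators attack this ceiling with techniques such as larger effective windows, local acknowledgement and compression, which is why they help even when raw bandwidth is plentiful.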
Posted by Richard Vining
I know. I’m sinking pretty low when I borrow a line from an animated gecko. But as I keep thinking that data backup and restore systems are very much like automobile insurance, I just can’t resist.
Think about it – what value do you get from paying for auto insurance, other than the peace of mind that should some fool run into you, you’ll be able to get back on the road in a reasonable amount of time and at a reasonable expense? The same is true with data backup: on its own, it offers little value while costing a lot of time and money, but you had better have one when something / anything goes wrong.
As with your auto insurance, you want to pay as little for backup/restore as possible while meeting your service level objectives. There are choices to be made that impact your costs and your recovery capabilities – does your policy include towing, collision repair, or the use of a rental car while yours is in the shop? And what is the out-of-pocket deductible you have to pay per accident?
Same thing with backup – which data do you protect, how often do you perform backup, how many versions and copies do you keep, how long do you keep them, where do you distribute them, how fast do you need to restore? All of these service level considerations can impact your costs.
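Those service-level choices translate directly into capacity, and hence cost. As a back-of-the-envelope illustration, here is a simplified incremental-forever model with hypothetical numbers (not TSM's actual accounting):

```python
def backup_storage_gb(full_size_gb, daily_change_rate, versions_kept):
    """Rough backup capacity consumed by one protected system: a single
    full copy plus one incremental (changed data only) per retained
    version."""
    return full_size_gb + full_size_gb * daily_change_rate * versions_kept

# Keeping 30 versions of a 1 TB server with 2% daily change needs about
# 1.6 TB; doubling retention to 60 versions pushes that to 2.2 TB.
```

Every knob listed above (frequency, versions, retention, copies) scales a term in a calculation like this, which is why service levels and costs have to be set together.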
At IBM, we recognize that on the one hand, your business requires the most advanced, reliable and scalable data protection solutions for your applications and data; and on the other hand, the investments in these solutions are nothing more than insurance – they don’t contribute to the top line, and they only contribute to the bottom line when they are called upon to recover operations following a data loss disaster.
We are helping our customers meet these conflicting challenges through an evolution of continuous improvements to our data protection and recovery software, led by Tivoli Storage Manager, that can dramatically improve your business continuity service levels while reducing your costs even more dramatically.
To learn how you may be able to cut the costs of your backup environment by 50% or more, please invest 15 minutes reading our new whitepaper, Ten Ways to Save Money with Tivoli Storage Manager.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Posted by Delbert Hoobler
IBM is looking for customers and business partners who are interested in participating in an Early Access Program (EAP)/Beta Program for an upcoming release of FlashCopy Manager, Data Protection for SQL, and Data Protection for Exchange. If you would like to nominate your organization to participate in this EAP/Beta, please send an email to:
Mary Anne Filosa (firstname.lastname@example.org)
and be sure to include your organization's name. Once your email is received you will be sent instructions on signing off on the EAP/Beta legal form online and when that signoff has been completed, you will be sent a link to the program's nomination site. We encourage you to respond quickly if you are interested as the program begins in mid December.
Posted by Martha Westphal
Live Webcast: Using Tivoli Storage Productivity Center to be the "eyes" into your SAN environment, and to see how that environment is changing. LIVE!
In the ever-changing SAN environment, Tivoli Storage Productivity Center has many components to help the storage administrator know when and where to focus their attention. We will walk through many of these in a live demo and see how they can be used.
Let TPC help you keep up with storage growth instead of working longer hours!
Speaker: Scott McPeek, IBM Program Director, Storage Sales Enablement. He has worked in the software industry for more than 30 years, the last ten with IBM as part of the TrelliSoft SRM acquisition. Scott now focuses on storage resource management, storage performance management and virtualization with products like TPC, SVC and the Storwize V7000.
Register for this Live Webcast here
Posted by Maria Huntalas
How are you spending your time this weekend? Polishing up your Pulse 2012 storage session abstract, hopefully! With only 4 days left to submit a 100-word abstract by Nov. 7, we thought it would be helpful to share some final pointers. Keep in mind that this year's theme is Business Without Limits and we are seeking to understand how you gained visibility, control and automation to deliver better business outcomes.
What are the key benefits to you as a Speaker? One full Pulse conference pass ($1995 value) and the opportunity to gain visibility for your company, and take advantage of an incredible networking opportunity with over 7,000 industry experts, press, and analysts.
Here are some pointers on how to get your Storage Management session abstract accepted:
1. Focus it on topics such as how you used Tivoli Storage Manager to manage "big data"; success with recent upgrades; or cloud storage
2. Tell us about the key business challenges you were trying to solve, and how IBM Tivoli storage solutions helped you address these challenges
3. What was the ROI, or key results, from implementing a Tivoli storage solution, and what valuable lessons did you learn from the experience
Don't forget to register during early
Well, it's that time again – hard to believe, I know. The Pulse call for papers has opened, and we want to have another banner year in the Tivoli Storage sessions! Last year we were standing room only in many of our sessions, and this year we hope to fill each room once again.
As for topic suggestions, we'd like to hear from customers who:
It's simple, just go to this link and submit a 100-word abstract. The deadline is November 7th, so there's no time like the present!
Speaker Benefits include:
Your abstract should include:
Posted by Amalore Jude
In my earlier post – Eliminate management inefficiencies and complexities associated with your cloud foray – I briefly touched upon 'storage tiering reports'. These reports are now available as part of this week's Tivoli Storage Productivity Center v4.2.2 announcement. One of the latest Storage Wave studies by The InfoPro points to 'Tiered Storage Build Out' as one of the top 3 initiatives among storage managers. Yet in a complex, virtualized environment, having complete visibility and control over storage tiering can be challenging.
• Are the backend subsystems optimally utilized?
• Does moving a certain workload to low-cost storage impact service levels?
• How can performance be leveled out in a certain pool?
• Which data groups can be moved to an alternate tier of storage?
Image: Sample tiering distribution report
To read more about Tivoli Storage Productivity Center, click here.
IBM announces Tivoli Storage Productivity Center Select – comprehensive storage management software that offers advanced provisioning, performance management, capacity optimization and reporting capabilities. Select includes all key capabilities of the Basic Edition, Disk and Data modules of the Tivoli Storage Productivity Center family, and is conveniently packaged for 'per enclosure' licensing.
Learn more about Select. Download Select data sheet.
Announcing a Beta Program for the Next Release of Tivoli Storage Manager (TSM) FastBack for Workstations
Posted by Omar Vargas
TSM FastBack for Workstations is a centrally-managed solution that reduces the risks of losing important information stored on thousands of personal computers across an entire enterprise, as described here:
IBM will be running a beta program for the next release of this product, providing those taking part with early access to the latest planned enhancements. If you would like to participate, please contact the beta coordinator, Matthew Boult (email@example.com).
Posted by Martha Westphal
NEW!! Technical Services Webinar: Capacity Planning in a Tivoli Storage Manager Environment
As much as customers would like to "back up everything and keep it forever," storage is not unlimited. The reality of ever-increasing data growth, combined with regulatory compliance and the associated risks, makes the arduous task of capacity planning for backup ever more critical. A new Reporting and Monitoring tool is available with Tivoli Storage Manager (TSM). This new tool, based on IBM Tivoli Monitoring, can collect and report on historical data and is an integral part of a capacity planning regimen.
This session will demonstrate a capacity planning methodology that conforms to the ITIL Capacity Planning process description, showing how the TSM Reporting and Monitoring tool and other TSM components can be utilized to ease the pain of capacity planning. Additionally, this session will look at strategies, like data deduplication, to reduce the amount of backup data while maintaining regulatory compliance.
Presenters: Mark Vanderboll, IBM Tivoli Global Response Team
Dave Daun, IBM Advanced Technical Skills
Access the webinar here: http://bit.ly/qdOuJU
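The core projection behind any such capacity planning exercise can be sketched in a few lines. This is a simplified compound-growth model with made-up numbers, not the TSM Reporting and Monitoring tool's actual method:

```python
def months_until_full(current_tb, capacity_tb, monthly_growth_rate):
    """Estimate how many whole months remain before backup storage is
    exhausted, assuming compound monthly growth at a constant rate.
    Returns None if the capacity is never reached at this rate."""
    months = 0
    used = current_tb
    while used < capacity_tb:
        used *= 1 + monthly_growth_rate
        months += 1
        if months > 600:  # guard against rates at or near zero
            return None
    return months

# 100 TB used of a 200 TB pool, growing 5% per month: about 15 months
# of headroom remain.
```

Fed with the historical growth rate that the reporting tool collects, an estimate like this tells you how long the current backup storage will last before deduplication or expansion is needed.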
Posted by Delbert Hoobler
Did you see that we announced a new version of Tivoli Storage FlashCopy Manager?
Here are the highlights of IBM® Tivoli® Storage FlashCopy® Manager V3.1:
It will be generally available on October 21, 2011.
Check out the full announcement here:
Posted by Richard Vining
Check out the NEW Tivoli Storage Manager v6.3
You should expect more from your storage, and from your storage vendor. On October 11 and 12, IBM is announcing a broad range of new and enhanced storage products that help to meet this challenge.
Included are significant updates to the Tivoli Storage Manager (TSM) family. TSM is already the data protection software leader in scalability, functionality, data reduction, performance and reliability. The v6.3 release will keep us ahead of the competition, and will help to keep you ahead of the challenges you’re facing.
Struggling with data growth? No problem.
The scalability of TSM is being doubled for the 3rd year in a row, now supporting up to 4 billion data objects in a single TSM Server. In 2008, the internal database limit was 500 million files, so we’ve made an 8X improvement since then. That means fewer backup servers are needed. And remember that TSM is a single server architecture; we don’t add “media servers” to provide scale.
Unified Recovery Management now includes Replication for faster Disaster Recovery
We’ve been working to simplify the data protection and recovery infrastructure by unifying the management of all the different tools you need for different applications, operating systems, data locations, and data loss causes. In the release of Tivoli Storage Manager Extended Edition v6.3, we’re adding client data and metadata replication to an off-site TSM Server. This provides a “hot standby” disaster recovery capability, managed from within the TSM Admin Center. The replication is asynchronous and can be scheduled on a per client basis to minimize the impact on network bandwidth. And it can be configured in a classic source-to-target model as well as between two active sources, many-to-one, or in a “round robin” architecture.
Simplifying the administrator’s life
One of the painful tasks an administrator has to do, especially in large environments, is patching the backup/archive client software on protected systems. With this release, we’re adding the ability to automatically push out client software updates across AIX, HP-UX, Linux and Windows systems (Windows push support was actually introduced last year). This new capability should reduce the time needed to perform an update by at least 80%.
Improved integration with VMware
Tivoli Storage Manager for Virtual Environments v6.3, our non-disruptive off-host solution for VMware virtualized servers, now supports VMware vSphere 5 and includes a plug-in for vCenter to easily manage TSM backup and restore operations from within the VMware environment. Tivoli Storage FlashCopy Manager v3.1 is also being released with VMware vStorage APIs for Data Protection integration and the vCenter plug-in to provide hardware-assisted application-aware snapshot management. Support for DB2, Oracle and SAP databases on HP-UX is also added in the new FlashCopy Manager release.
Something BIG for mainframe customers
Tivoli Storage Manager for z/OS Media v6.3 is a new connector that enables customers to leverage their mainframe-attached FICON storage devices for storing TSM data. This offering won’t increase the licensing costs for existing customers that move their TSM v5.x Server software from z/OS and install TSM v6.3 Server on an AIX system or a Linux on z partition, and gives them access to all of the cost-saving improvements made in TSM over the past 3 years.
The new standard in licensing simplicity
In June we announced the availability of the Tivoli Storage Manager Suite for Unified Recovery. This bundle of ten TSM and FastBack products is simply licensed by the amount of data being managed within the TSM environment, first copy only. We have seen outstanding results from this new offering, from both new and existing customers. The reason is simple: you want to use the right tool for each data protection job, but you don’t want to have to worry about acquiring and managing individual product licenses for each one. This is especially true in virtual server and cloud environments. Added benefit: our broad range of built-in data reduction technologies can dramatically reduce the costs of this offering.
Tivoli Storage Manager Suite for Unified Recovery v6.3 is also being announced, and includes all of the TSM and TSM for Virtual Environments enhancements noted above.
Many other improvements are being introduced across the family, including better reporting and monitoring; better scalability for Microsoft Windows, Exchange and SQL Server; and faster internal processes such as database backup. SAP customers using TSM for Enterprise Resource Planning v6.3 can now do incremental backups using Oracle RMAN.
For more information on the Tivoli Storage Manager enhancements, please refer to the announcement letter on ibm.com (link)
To learn more about all the new IBM Storage announcements, please click here (live 12 Oct.)
Also, please check out the new whitepaper, “Ten ways to save money with Tivoli Storage Manager”
Posted by Amalore Jude
Continuing my earlier post – Eliminate management inefficiencies and complexities associated with your cloud foray – I would like to call out some of the advanced capabilities that Tivoli Storage Productivity Center offers.
Ron's recent post on choosing the right storage hypervisor points to 'comprehensive performance monitoring' as one of the key capabilities you need to successfully deploy cloud storage. This reinforces the need for sophisticated tools that can significantly reduce the burden of storage configuration (think best practices) and performance monitoring.
Bottleneck analysis
It's no longer the network administrator who gets the call when system response is poor – it's the storage administrator. Especially in a virtualized environment, it is essential to have performance monitoring tools that provide a quick yet comprehensive view of the data path to ascertain any bottlenecks. With Tivoli Storage Productivity Center, you can see where bottlenecks have occurred; for example, one storage subsystem may be overutilized while another is underutilized.
Data Path Explorer offers a detailed view of all the storage entities and their connectivity. It provides performance information across the entire data path – from host to array – and allows you to drill down and view performance metrics at the port level. Standard Edition, the advanced module within Tivoli Storage Productivity Center, offers advanced reporting capabilities for bottleneck analysis.
According to a storage manager at a leading medical university, “With Tivoli Storage Productivity Center, I can quickly determine if there exists a bottleneck in the SAN infrastructure. Earlier it could take me days or sometimes weeks to figure that out. Now, I can do it in minutes”.
If you have recently deployed Tivoli Storage Productivity Center, make use of IBM’s Value Pack service offerings, which provide analysis of disk subsystem performance bottlenecks using native product tools. Talk to your IBM sales representative or IBM business partner for more information.
Configure your SAN the best practices way
I touched upon SAN Planner briefly in my previous post – and would like to delve a little deeper in this one. As mentioned earlier, SAN Planner is a wizard-based tool that assists storage administrators in end-to-end planning involving all storage components and related networks. SAN Planner helps implement best practices pertaining to replication relationships; it utilizes current and historical performance metrics to recommend the best configuration while commissioning new storage systems.
There are three planners associated with recommending storage configuration changes, based on current workloads, capacity utilization and best practices:
• Volume Planner – helps administrators provision storage based on capacity, compression, RAID levels and so on. It also includes replication planning, supporting sessions such as Metro Mirror, Global Mirror and FlashCopy.
• Zone Planner – provides zoning and LUN masking configuration support.
• Path Planner – assists in planning and implementing storage provisioning for hosts and storage systems with multipath support in fabrics.
All three planners can be invoked separately, or together in an integrated manner, from the Tivoli Storage Productivity Center console. Learn more about these planners and their capabilities: download the latest Redbooks.
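To make the volume-planning idea concrete, here is a hypothetical miniature in Python. The pool names, attributes and selection rule are illustrative only, not SAN Planner’s actual algorithm:

```python
# Hypothetical sketch of what a volume planner decides: pick a pool that
# satisfies the requested service attributes and has room.

def plan_volume(pools, size_gb, raid):
    """Recommend the first pool that matches the requested RAID level
    and still has enough free capacity."""
    for pool in pools:
        if pool["raid"] == raid and pool["free_gb"] >= size_gb:
            return pool["name"]
    return None  # no suitable pool: new capacity must be commissioned

pools = [
    {"name": "gold-pool", "raid": "RAID-10", "free_gb": 300},
    {"name": "silver-pool", "raid": "RAID-5", "free_gb": 2000},
]
choice = plan_volume(pools, 500, "RAID-5")
```

The real planner weighs current and historical performance as well as capacity, which is exactly why a wizard beats doing this arithmetic by hand.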
As you can see, configuring a SAN with Tivoli Storage Productivity Center is child’s play. But can you check whether your current SAN configuration conforms to best practices? Yes, you can!
SAN Configuration Analyzer provides an end-to-end check of configuration policies and ensures the correctness of storage network configurations such as zoning, multipathing and replication. In addition, the tool alerts administrators when best practices are violated.
Storage networks change frequently to accommodate shifting business policies and ever-growing data. Administrators are challenged to track configuration changes for problem determination, change management or auditing purposes. Tivoli Storage Productivity Center offers the SAN Configuration History Viewer to provide a historical view of changes and eliminate the time gap in identifying problem areas associated with configuration changes.
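One well-known zoning best practice is single-initiator zoning (one host port per zone). A toy Python check of that rule, with invented zone and port names, gives a feel for the kind of policy the analyzer enforces:

```python
# Illustrative policy check only; the real analyzer covers zoning,
# multipathing and replication policies across the whole fabric.

def zoning_violations(zones):
    """Return names of zones that break the single-initiator zoning
    best practice (more than one host initiator in one zone)."""
    bad = []
    for zone in zones:
        initiators = [m for m in zone["members"] if m["role"] == "initiator"]
        if len(initiators) > 1:
            bad.append(zone["name"])
    return bad

zones = [
    {"name": "z_host1_arrayA",
     "members": [{"port": "host1-hba0", "role": "initiator"},
                 {"port": "arrayA-p1", "role": "target"}]},
    {"name": "z_shared",
     "members": [{"port": "host1-hba0", "role": "initiator"},
                 {"port": "host2-hba0", "role": "initiator"},
                 {"port": "arrayA-p1", "role": "target"}]},
]
```

In the product, a violation like `z_shared` would raise an alert to the administrator rather than just appear in a list.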
To learn more about the IBM Tivoli Storage Productivity Center Select Series, contact your IBM sales representative or IBM Business Partner, or visit ibm.com.
Click here to join the virtual dialogue on Storage Hypervisor; share your thoughts and concerns in our group chat on October 7, 2011 from 12 noon to 1pm Eastern Time. You can log in now for a preview of topics.
Martha Westphal – Tags: self-service, cloud-storage, storage, pay-per-use, virtual-storage, service-catalog, storage-hypervisor
And don't forget to listen to the 'open mic' conversation about Storage Hypervisors with IBM's Ron Riffe, the author of this blog series, and ESG analyst, Mark Peters:
Ron Riffe 100000EXC7 email@example.com Tags:  usage-and-accounting-mana... svc cloud-storage tpc flashcopy-manager 4,419 Visits
This is part 3 of a 3 part post on how somebody responsible for a private storage environment could save their company a pile of money by implementing cloud storage techniques. Part I introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. Part II explained how a storage service catalog, self-service provisioning, and usage-based chargeback can be used to drive down cost. In this 3rd post, I’m going to share some thoughts that should help you be smarter about choosing a storage hypervisor.
The first step is to remind ourselves what we’re trying to accomplish with a storage hypervisor. From our experience deploying over 7000 storage hypervisors, the starting point for most folks is improved efficiency and data mobility. Remember, the basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, people are looking for the freedom to move a slice (or virtual volume) from tier to tier, from vendor to vendor, and more recently, from site to site all while the applications are online and accessing the data.
To pull off this level of mobility – in servers or storage – it’s important that the hypervisor not be dependent on the underlying physical hardware for anything except capacity (compute capacity in the case of a server hypervisor like VMware, storage capacity in the case of a storage hypervisor). Think about it… Wouldn’t it be odd to have a pair of VMware ESX hosts in a cluster, one running on IBM hardware and one on HP hardware, and be told that you couldn’t vMotion a virtual machine between the two because some feature of your virtual machine would just stop working? If you tie a virtual machine to a specific piece of hardware in order to take advantage of the function in that hardware, it sort of defeats the whole point of mobility. The same thing applies to storage hypervisors. Virtual volumes that are dependent on a particular physical disk array for some function, say mirroring or snapshotting for example, aren’t really mobile from tier to tier or vendor to vendor any more.
But it’s more than just a philosophical issue; there’s real money at stake (you may want to read what comes next a couple of times). In Part II of this post I discussed using a storage service catalog as a means of defining specific service level needs for your different categories of data. These service levels covered the gamut from capacity efficiency and I/O performance (for you techies that’s RAID levels, thin provisioning, use of solid state disks, etc.), to data access resilience and disaster protection (multi-pathing, snapshotting, mirroring…). The reason so many datacenters have an overabundance of tier-1 disk arrays on the floor is because, historically, if you wanted to take advantage of things like thin provisioning, application-integrated snapshot, robust mirroring for disaster recovery, high performance for database workloads, access to solid-state disk, etc., you had to buy tier-1 ‘array capacity’ to get access to these tier-1 ‘storage services’ (did you catch the subtle difference?). Now, I don’t have anything against tier-1 disk arrays (my company sells a really good one). In fact, they have a great reputation for availability (a lot of the bulk in these units is sophisticated, redundant electronics that keep the thing available all the time). But with a good storage hypervisor, tier-1 ‘storage services’ are no longer tied to tier-1 ‘array capacity’ because the service levels are provided by the hypervisor. Capacity…is capacity…and you can choose any kind you want. Many clients we work with are discovering the huge cost savings that can be realized by continuing to deliver tier-1 service (from the hypervisor), only doing it on lower-tier disk arrays. As I noted in Part II of this post, we’ve seen clients shift their mix of ‘array capacity’ from 70% tier-1 to 70% lower-tier arrays while continuing to deliver tier-1 ‘storage services’ to their data. This YouTube video describes an example of that at Sprint.
Smart idea #1: Be careful to understand what, if any, dependency a storage hypervisor has on the capability of an underlying disk array to deliver function to your virtual volumes (like thin provisioning, compression, snapshotting, mirroring, etc.)
Next thought. There are three rather interrelated solution categories in the area of dealing with outages and protecting data.
Smart idea #2: Remembering smart idea #1, be sure that your storage hypervisor has its own ability to provide for disaster avoidance (inter-site mobility), disaster recovery (mirroring that’s integrated with recovery automation tools) and data protection (snapshotting that’s integrated with applications and backup tools).
One final thought. A storage hypervisor isn’t an island unto itself. Like a server hypervisor, it exists in a broader datacenter. Administrators need to be able to see it in the context of the disk arrays it manages, the servers (or virtual machines) that use its virtual volumes, the applications that need backups or clones, the disaster recovery automation that’s coordinating recovery of servers, storage, networks… You get the picture. When the challenges of day-to-day operations happen (and they do happen most every day)…
Smart idea #3: Make a list of all the day-to-day operational things you do today with physical storage, and the things you hope to automate in the future, and be careful to understand if your storage hypervisor is sufficiently instrumented and integrated – or if it’s creating a new island to be separately managed.
I had an opportunity recently to talk on an open microphone podcast with ESG senior analyst Mark Peters about this whole area of storage hypervisors. It was an enlightening conversation full of very actionable thoughts. I recommend listening to the podcast.
And now a word from our sponsors :-) IBM offers the world’s most widely deployed storage hypervisor. With over 7000 deployments, hundreds in the newer inter-site disaster avoidance configuration, we’ve had a lot of opportunity to build some depth. As you evaluate using cloud storage techniques in your private enterprise, you’ll find the things I talked about in this blog series available in IBM products today. They can help you save your company a pile of money (and make you look like a genius while you’re doing it).
Storage hypervisor platform: IBM System Storage SAN Volume Controller (SVC)
Storage hypervisor management, storage service catalog, and self-service provisioning: Tivoli Storage Productivity Center Standard Edition (TPC SE)
Usage-based chargeback: Tivoli Usage and Accounting Manager (TUAM)
Data protection integration: Tivoli Storage FlashCopy Manager and Tivoli Storage Manager Unified Recovery
Thanks for staying with me through this blog series – hope you find it valuable!
The conversation continues!
Amalore Jude – Tags: storage-resource-manageme..., tivoli-storage-productivi..., tpc, srm, ibm-tpc
While storage on cloud is a promising thought, the ensuing complexity associated with monitoring, managing and reporting on the storage sprawl acts as a significant deterrent. Organizations need to equip their environment with a robust storage resource management application that can withstand business demands, offer comprehensive visibility across the data path, and scale as their storage network expands. Fellow IBMer Ron Riffe, in his recent blog on the Storage Hypervisor, wrote about the importance of management controls and highlighted some of the key capabilities of IBM’s storage resource management offering – Tivoli Storage Productivity Center.
Tivoli Storage Productivity Center supports both IBM and other-vendor storage devices that are compliant with SMI-S standards, and offers integrated storage infrastructure management capabilities that simplify, automate, and optimize the management of storage devices, storage networks, and capacity utilization of file systems and databases.
I am going to highlight four key capabilities administrators will require should they pursue putting their storage on the cloud…
Self-service provisioning
Tivoli Storage Productivity Center enables automated discovery and wizard-based provisioning of storage systems, allowing administrators to provision new storage effectively through best-practice methods, often including disaster recovery planning during provisioning.
Tivoli Storage Productivity Center offers the SAN Planner, which assists the administrator in end-to-end planning involving fabrics, hosts, storage controllers, storage pools, volumes, paths, ports, zones, zone sets, storage resource groups (SRGs), and replication. The SAN Planner provides recommendations for creating and provisioning VDisks with the recommended I/O group and preferred node for each VDisk.
Combining Tivoli Storage Productivity Center with Tivoli Provisioning Manager, storage administrators have a powerful way to simplify the provisioning of storage. Automated workflows can be created that utilize custom scripts and customer processes, including storage administrator and/or systems administrator sign-off. The native integration between Tivoli Storage Productivity Center and Tivoli Provisioning Manager enables storage administrators to let application owners procure and provision the storage space they need, the type of storage they need, the throughput that is expected and the price specifications to suit their business priorities.
To read more about SAN Planner, click here.
Storage tiering reports (to be announced on Oct 14, 2011)
Storage tiering reports were developed by IBM Systems Lab Services under the larger theme known as STAR – Storage Tiering Activity Reporter – which provides decision support for data placement and enables administrators to optimize resource utilization in terms of capacity and throughput.
Storage tiering reports help administrators to leverage storage virtualization and insights from Tivoli Storage Productivity Center to
• Identify data that could be moved to an alternate tier of storage or a less active Managed Disk Group
• Identify the hottest and coolest Managed Disk Groups and Virtual Disks based on performance, to assist in up-tiering, down-tiering and re-tiering decisions
• Make “proactive” volume placement decisions
• Understand the performance stress on the hardware in comparison to its capability
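A simple way to picture the hottest/coolest ranking is I/O density (IOPS per usable GB) per Managed Disk Group. The pool names and figures below are invented, and STAR’s real model is richer, but the idea is the same:

```python
# Illustrative heat ranking; STAR's actual decision support uses far
# more than a single I/O-density metric.

def rank_by_heat(mdisk_groups):
    """Order Managed Disk Groups by I/O density (IOPS per GB),
    hottest first. Top entries are up-tiering candidates, bottom
    entries are down-tiering candidates."""
    return sorted(mdisk_groups,
                  key=lambda g: g["iops"] / g["gb"],
                  reverse=True)

groups = [
    {"name": "sas-pool", "iops": 1200, "gb": 4000},
    {"name": "ssd-pool", "iops": 9000, "gb": 800},
    {"name": "nl-pool", "iops": 150, "gb": 12000},
]
ranked = rank_by_heat(groups)
```

With virtualized storage underneath, acting on the ranking is a non-disruptive volume move rather than a migration project.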
A large European telecommunications giant saw a 45% decline in storage acquisition cost in its first deployment, translating into a 55% discounted-cash-flow saving over a 4-year TCO evaluation.
Ensuring storage service levels
Continuous performance monitoring and reporting is key to business continuity and maintaining service levels in a cloud environment. Tivoli Storage Productivity Center provides end-to-end visibility to administrators from a single management console, including detailed performance metrics, data path and system connectivity, impact analysis and historical trending.
Administrators can ensure storage devices, storage networks and attached devices are performing in an optimized state by setting different threshold levels for different storage entities based on the criticality of the asset. Alerts are generated when these thresholds are exceeded, duly notifying administrators of potential impact and downtime. Tivoli Storage Productivity Center also offers policy-based event actions based on performance values and business policies.
Tivoli Storage Productivity Center provides storage utilization insights from raw performance data against proven models to predict the utilization of components within the subsystem. This feature provides a unique “heat map” style of display that makes it easier for administrators to narrow in on storage “hot spots”, and thus more easily identify risks and discover both under- and over-utilized areas of the storage infrastructure.
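The per-entity threshold idea above is easy to sketch. The entity names, units and limits here are invented; in the product the alert would drive a notification or a policy-based action:

```python
# Illustrative sketch of per-entity threshold alerting; names and
# limits are made up for the example.

def check_thresholds(metrics, thresholds):
    """Return (entity, measured, limit) for every entity whose
    measured value exceeds its configured threshold."""
    alerts = []
    for entity, value in metrics.items():
        limit = thresholds.get(entity)
        if limit is not None and value > limit:
            alerts.append((entity, value, limit))
    return alerts

# A critical production port gets a tighter limit than a dev port.
thresholds = {"prod-array-port2": 70.0, "dev-array-port1": 90.0}
metrics = {"prod-array-port2": 83.5, "dev-array-port1": 41.0}
alerts = check_thresholds(metrics, thresholds)
```

Setting the limit per entity, rather than one global number, is what lets a critical asset trip an alert long before a sandbox system would.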
To learn more about Storage Optimizer, click here.
Chargeback
In a cloud environment, administrators are challenged to create and manage shared storage services that can be charged back to users based on consumption. When usage varies between departments or businesses, storage administrators require chargeback capabilities in order to simplify departmental allocation and manage capacity utilization.
Through integration with Tivoli Usage and Accounting Manager, Tivoli Storage Productivity Center for Data enables storage administrators to understand storage usage and perform cost allocation or chargeback to users of storage. These solutions help support storage administrators in their efforts to track, allocate and bill different departments and lines of business based on multiple usage metrics. As a result, the organization can better align the storage infrastructure with overall business objectives and be better prepared to meet future requirements.
Tivoli Storage Productivity Center for Data supports multi-tenancy requirements, allowing cloud storage administrators to manage and track storage usage for multiple clients simultaneously. Advanced multi-customer support capabilities enable organizations to charge in different currencies and to charge different rates for the same service, as well as providing cloud consumers with price breakdowns for resources used and resources reserved for future use. The solution supports large data centers as well as public and hybrid cloud environments.
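The multi-tenant billing described above, with different rates and currencies for the same service, can be sketched in a few lines of Python. Tenant names, rates and currencies are invented; the real TUAM integration works from collected usage metrics, not a hand-built dictionary:

```python
# Illustrative multi-tenant chargeback; rates and tenants are made up.

def invoice(usage_gb, rate_card):
    """Compute a per-tenant bill. Each tenant may be charged a
    different rate, in a different currency, for the same service."""
    bills = {}
    for tenant, gb in usage_gb.items():
        rate, currency = rate_card[tenant]
        bills[tenant] = (round(gb * rate, 2), currency)
    return bills

usage = {"finance": 1200, "engineering": 3400}
rates = {"finance": (0.30, "USD"), "engineering": (0.22, "EUR")}
bills = invoice(usage, rates)
```

Once consumption carries a visible price, application owners start making more deliberate service-level choices, which is the real point of chargeback.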
Click here to download the white paper ‘Optimizing capacity and management of file systems and databases’.
Ron Riffe – Tags: pay-per-use, self-service, service-catalog, virtual-storage, storage, cloud-storage, storage-hypervisor
This is part 2 of a 3 part post on how somebody responsible for a private storage environment could save their company a pile of money by implementing cloud storage techniques. Part I introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. In this 2nd post, I’m going to explain some of the key storage cloud management controls that can be used to drive down cost.
Storage services are standardized. When it comes to shopping, I avoid (at almost all costs) actually going to the store. You can keep all the time and frustration of traffic, fighting for a parking place, wandering aimlessly through aisles of choices, and standing in checkout lines. I’ll take the simplicity and speed of a good online catalog any day.
The idea of shopping from a catalog isn’t new, and the cost efficiency it offers to the supplier isn’t new either. Public storage cloud service providers seized on the catalog idea quickly, both as a means of providing a clear description of available services to their clients and as a means of controlling costs. Here’s the idea… I can go to a public cloud storage provider like Amazon S3, Nirvanix, Google Storage for Developers, or any of a host of other providers, give them my credit card, and get some storage capacity. Now, the “kind” of storage capacity I get depends on the service level I choose from their catalog. These folks each offer a few different service level options. Amazon S3, for example, offers Standard Storage or Reduced Redundancy Storage (can you guess which one costs less?).
Most of today’s private IT environments represent the complete other end of the pendulum swing – total customization. Every application owner, every business unit, every department wants to have complete flexibility to customize their storage services in any way they want. This expectation is one of the reasons so many private IT environments have such a heavy mix of tier-1 storage. Since there is no structure around the kind of requests that are coming in, the only way to be prepared is to have a disk array that could service anything that shows up. Not very efficient… There has to be a middle ground.
Now, for each catalog entry, there are a variety of service levels that can be defined that cover things like capacity efficiency, I/O performance, data access resilience, and disaster protection. By this point you’re probably rolling your eyes because you know your application owners… and they’re going to want every byte of their data to have the highest available service in each of these areas (and you wonder why you have so much tier-1 storage). A little bit further into this post we’re going to talk about the wonder of usage-based chargeback, but we’re getting ahead of ourselves. For now, let’s assume you’re having a coherent conversation with your application owners and are able to define realistic needs for your database data. Maybe something like this…
From there, you’re back to the wizard. Actually defining the attributes of the catalog entry is a little mundane (lots of propeller-head knobs and dials to turn), but once you’re done – you’re done! – and life gets really efficient. So, let’s get the mundane stuff out of the way. First are the capacity efficiency and I/O performance attributes (be sure to notice that for “Database” we are telling the catalog we want virtual volumes – from a storage hypervisor. There will be a test in a paragraph or so :-)
Then the data access resilience attributes.
And finally the disaster protection attributes.
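The three attribute groups above can be pictured as a data structure. The entry names and attribute fields below are invented for illustration, not TPC SE’s actual catalog schema:

```python
# Illustrative shape of two service catalog entries; every field name
# and value here is hypothetical.
catalog = {
    "Database": {
        "capacity": {"virtual_volumes": True, "thin_provisioned": True,
                     "raid": "RAID-10", "ssd_eligible": True},
        "resilience": {"multipath": True, "snapshot_interval_hours": 6},
        "disaster_protection": {"mirroring": "Metro Mirror"},
    },
    "Archive": {
        "capacity": {"virtual_volumes": True, "thin_provisioned": True,
                     "raid": "RAID-6", "ssd_eligible": False},
        "resilience": {"multipath": True, "snapshot_interval_hours": 24},
        "disaster_protection": {"mirroring": None},
    },
}

def service_level(entry_name, area, attribute):
    """Look up one attribute of one catalog entry."""
    return catalog[entry_name][area][attribute]
```

Fifteen-ish entries like these replace unbounded per-request customization, and that standardization is where the efficiency comes from.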
I told you it was a little mundane. But now come the exciting results that really drive cost out of the environment and save a huge amount of administrative time.
First is capital expense. You’re running mostly tier-1 disk arrays today. You have just finished defining the fifteen-ish catalog entries your company is going to use. Some, like “Database”, call for storage services that are often associated with tier-1 disk arrays. Most others don’t. With a little intelligent forecasting, you should be able to determine exactly how much tier-1 storage capability you really need, and how much lower-tier storage you can start using. We’ve seen clients shift their mix from 70% tier-1 to 70% lower-tier storage (a pretty significant capital expense shift). And if the thought of moving all that existing data from tier-1 to a lower tier makes you shudder, refer back to Part I of this post and look again at the data mobility provided by a good storage hypervisor (Test: did you notice that for “Database” we told the catalog we wanted virtual volumes – from a storage hypervisor…).
The second big savings is in operational expense (keep reading).
Here comes the request from an application owner for 500GB of new “Database” capacity (one of the options available in the storage service catalog) to be attached to some server. After appropriate approvals, the administrator can simply enter the three important pieces of information (type of storage = “Database”, quantity = 500GB, name of the system authorized to access the storage) and click the “Go” button (in TPC SE it’s actually a “Run now” button) to automatically provision and attach the storage. No more complicated checklists or time-consuming manual procedures.
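Those three inputs driving a whole provisioning workflow can be sketched like this. The catalog shape, action strings and host name are hypothetical, not the TPC SE interface:

```python
# Hypothetical sketch of catalog-driven provisioning from three inputs:
# service name, size, and authorized host.

def provision(catalog, service, size_gb, host):
    """Expand a three-input request into the ordered actions an
    automation engine would carry out."""
    entry = catalog[service]
    kind = "thin" if entry["thin"] else "thick"
    actions = [
        f"create {size_gb}GB {kind} volume ({entry['raid']})",
        f"zone and mask volume to {host}",
    ]
    if entry.get("mirrored"):
        actions.append("establish mirror relationship at recovery site")
    return actions

catalog = {"Database": {"thin": True, "raid": "RAID-10", "mirrored": True}}
steps = provision(catalog, "Database", 500, "dbserver01")
```

Everything below the three inputs is predetermined by the catalog entry, which is why the administrator’s job collapses to one button.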
Storage is paid per use. It’s the little appreciated – but incredibly powerful tool in the quest to drive cost out of the environment. When end users are aware of the impact of their consumption and service level choices, they tend to make more efficient choices. Conversely (we all know what happens here), when there’s no correlation between service level choices and end user visibility to cost… well… you have a lot of tier-1 storage on the floor.
A chargeback tool like IBM Tivoli Usage and Accounting Manager (TUAM) completes the story we have been building…
Stay tuned for Part III of this post where I’ll explore some technical thoughts you’ll want to consider when picking a storage hypervisor.
The conversation is building! Earlier this week, fellow IBM blogger Tony Pearson joined the conversation with a perspective on Storage Hypervisor integration with VMware. And IBM blogger Rich Vining added a perspective on VMware Data Protection with a Storage Hypervisor. To cap it off, we just completed our first live group chat with over 30 IT managers, industry analysts, independent bloggers, and IBM storage experts.
Join the conversation! The virtual dialogue on this topic will continue in another live group chat on September 30, 2011 from 12 noon to 1pm Eastern Time.
Richard Vining – Tags: productivity, vadp, vsphere, backup, protection, recovery, volume controller, cloud, manager, hypervisor, vm, san, tivoli, center, restore, vcenter, storage, unified, data
I recently read an excellent post by Ron Riffe, a fellow IBMer discussing practical recommendations for introducing cloud techniques into a private storage environment – the end goal being to save your company a substantial amount of money while becoming more responsive to the needs of the business. The first of the four steps discussed in the post was to introduce a storage hypervisor – virtualization of your storage infrastructure. It’s a good idea, especially if you have already virtualized some or all of your production server environment with something like VMware.
But there’s more to it than just the efficiency and mobility you get from virtualizing. The customers we talk to are finding new value that rises out of the synergy when both the server and storage environments are virtualized. One example is in the area of data protection. In this post, I’m going to explain the 1+1=3 effect for data protection that comes from combining VMware with a good storage hypervisor.
Let’s start with a quick walk down memory lane. Do you remember what your data protection environment looked like before virtualization? There was a server with an operating system and an application… and that thing had a backup agent on it to capture backup copies and send them someplace (most likely over an IP network) for safe keeping. It worked, but it took a lot of time to deploy and maintain all the agents, a lot of bandwidth to transmit the data, and a lot of disk or tapes to store it all. The topic of data protection has modernized quite a bit since then.
Today, you’re using a server hypervisor (VMware) to efficiently pack several virtual machines onto one physical server – and to make it so you can deploy, move and decommission those VMs pretty much at will. If you are still using the old techniques for data protection (deploying an agent on each individual VM, and then transferring all the backup data for those VMs through the one IP network pipe) on that physical server, you’re probably running into significant performance and application availability problems, and also missing out on some significant savings (if you listen carefully, you can hear your backup environment screaming “modernize me, MODERNIZE ME!”).
Fast forward to today. Modernization has come from three different sources – the server hypervisor, the storage hypervisor and the unified recovery manager. The end result is a data protection environment that captures all the data it needs in one coordinated snapshot action, efficiently stores those snapshots, and provides for recovery of just about any slice of data you could want. It’s quite the beautiful thing.
Data capture: VMware has provided a nice set of APIs that allow disk arrays and backup vendors to intelligently drive snapshots of a VMware datastore (for the techies, these are the vStorage APIs for Data Protection, or VADP). The problem is that integration from a disk array to these APIs is a tier-1 kind of service that is found on very few disk arrays today. That’s where a good storage hypervisor comes in. A storage hypervisor will include its own integration between VMware VADP and hardware-assist snapshots, and it will plug its control GUI directly into the VMware vCenter management console. That means, regardless of what type of disk array capacity you have chosen to use for your VMware data, the storage hypervisor will be able to do a hardware-assisted snapshot of the VMware datastore (all your VMs at once – sweet!).
Efficient storage: Here’s a scenario we see…
The snapshots can add up, so efficiency is important. For the “online” snapshots, a good storage hypervisor stores only incremental changes, compresses the result and stores it as a thin provisioned volume on lower-tier disk capacity (the new 3TB SAS drives make a nice choice). Notice in this scenario, the administrator is also promoting one of the snapshots each day (say, the midnight snapshot) to an enterprise recovery manager. If you are using IBM’s Tivoli Storage Manager Suite for Unified Recovery, then it will insert deduplication in the list of efficiency techniques being applied to the snapshot (incremental snapshots that are deduplicated, compressed, and stored on lower-tier disk… that’s about as efficient as it gets).
Flexible recovery: Whether the snapshot is online or nearline, the only reason you have it is so that you can recover when something (anything) goes wrong. A good hypervisor / unified recovery manager combination will give VMware administrators the ability to peer inside the snapshot and recover individual files, virtual volumes, or entire VMs. Using the scenario above, your recovery point would be no more than 6 hours old for the last 4 days, and your recovery time would be measured in minutes.
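The recovery-point arithmetic in the scenario above (a snapshot every 6 hours kept online for 4 days, with the midnight copy promoted nightly) can be checked with a small sketch. The interval and retention values mirror the scenario; the function names are invented:

```python
# Sketch of the snapshot schedule from the scenario: every 6 hours,
# retained online for 4 days, midnight copies promoted nightly.

def snapshot_schedule(interval_hours, retention_days):
    """Return (online, promoted): ages in hours of retained online
    snapshots, and which of those fall on a midnight boundary and
    get promoted to the enterprise recovery manager."""
    online = list(range(0, retention_days * 24, interval_hours))
    promoted = [age for age in online if age % 24 == 0]
    return online, promoted

online, promoted = snapshot_schedule(6, 4)
worst_case_rpo = max(b - a for a, b in zip(online, online[1:]))
```

Sixteen online snapshots, four promoted copies, and a worst-case recovery point of 6 hours, which matches the "no more than 6 hours old for the last 4 days" claim.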
IBM offers one of the world’s best-known unified recovery managers and the world’s most widely deployed storage hypervisor. With over 7000 storage hypervisor deployments, we’ve had a lot of opportunity to build some depth. Deep integration with VMware for modernizing your data protection environment is one example. If you are running VMware and haven’t yet modernized data protection, IBM can help. You can learn more at the following links.
Storage hypervisor platform: IBM System Storage SAN Volume Controller (SVC)
Storage hypervisor management, storage service catalog, and self-service provisioning: Tivoli Storage Productivity Center Standard Edition (TPC SE)
Data protection integration: Tivoli Storage FlashCopy Manager and Tivoli Storage Manager Suite for Unified Recovery
Join the conversation! The virtual dialogue on this topic will continue in a live group chat on September 23, 2011 from 12 noon to 1pm Eastern Time. Join some of the Top 20 storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
Come Listen to Tivoli Storage: Simplify Data Protection and Reduce Costs with Unified Recovery Management
Martha Westphal – Tags: ibmsoftware, ibmstorage, tsm, storage, management, tivoli
Simplify Data Protection and Reduce Costs With Unified Recovery Management
On September 22, we will be hosting an educational webcast that will address the challenges of providing data protection and recovery for rapidly growing amounts of diverse enterprise data. During this call, you will hear about our unified recovery management solution that can help reduce complexity, risk and costs. Included in this solution is a new simple, value-based option for procuring and managing software licenses.
Speaker: Rich Vining, Product Marketing Manager
Date: September 22, 2011
Time: 11:00 AM Eastern US
Please register for this event using this link.
After registering you will receive a confirmation note with call-in instructions.
Ron Riffe – Tags: virtual-storage, storage-hypervisor, storage, vmotion, vmware, cloud-storage, live-partition-mobility, powervm
To borrow a phrase from a fellow blogger… Interest from customers on cloud storage is very, very hot, and that’s been keeping us very, very busy. The interest underscores the fact that public storage cloud providers have sent a “cost shockwave” through the industry and customers are taking notice.
While CIOs may still be too concerned about security and service levels to put much real corporate information in the public cloud, they have taken notice that these service providers are offering storage capacity at prices that are often lower than what they are paying for their own private storage. Sure, a service provider theoretically has more economy of scale and so could demand a better price from their hardware vendors, but they also have some profit margin to build into their “service”. There has to be more to it. The customers I talk to are wondering what these service providers are doing to operate at those costs – and whether any of their techniques can be applied in a private storage environment.
The situation raises the question: what differentiates these public storage clouds from the traditional private storage environments that most clients operate? From our experience with customers, there are four significant differences.
In this post, I’m going to try to explain these four concepts in enough detail that somebody responsible for a private storage environment could walk away with practical recommendations that could save their company a pile of money. Most of this isn’t really original (the concepts have been around for a while), but so few enterprises operate this way that the person who introduces these ideas to their company often looks like a genius (and who doesn’t like that?). It’s a long topic, so I’ve broken it into three posts.
Storage resources are virtualized. Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them. Server hypervisors and the virtual machines they manage have improved efficiency (no more wasted compute resources), freed up mobility, and ushered in a whole new “cloud” language.
Well, the same ideas apply to storage. As administrators catch on to these ideas, it won’t be long before we’ll be asking the question “Do you remember back when virtual machines used disks that really were physical arrays (all that “physical” stuff that kept everything in one place and slowed all your processes down)?”
But storage hypervisors are more, much more than just virtual slices and data mobility. Remember, we’re trying to think like a service provider who is driving cost out of the equation. Sure, we’re getting high utilization from allocating virtual slices, but are we being as smart as we could be about allocating those slices? A good storage hypervisor helps you be smart.
Are you getting the picture of why so many enterprises are beginning to agree with Gartner that a storage hypervisor can be a great first step in transitioning traditional IT into a private cloud storage environment? Application owners come to you for storage capacity because you’re responsible for the storage at your company. In the old days, if they requested 500GB of capacity, you allocated 500GB off of some tier-1 physical array – and there it sat. But then you discovered storage hypervisors! Now you tell that application owner he has 500GB of capacity… What he really has is a 500GB virtual volume that is thin provisioned, compressed, and backed by lower-tier disks. When a few of his data blocks get really hot, the storage hypervisor dynamically moves just those blocks to higher-tier storage such as SSDs. His virtual disk can be accessed anywhere across vendors, tiers and even datacenters. And in the background you have twice changed the vendor storage he actually sits on, because you found a better supplier. But he doesn’t know any of this, because all he sees is the 500GB virtual volume you gave him. It’s “in the cloud”.
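The mechanics described above can be sketched in a few lines of code. This is an illustrative model only (not SAN Volume Controller code, and the class and tier names are invented for the example): a thin-provisioned virtual volume allocates cheap physical extents only when blocks are first written, and promotes just the hot extents to a faster tier.

```python
# Illustrative sketch, not IBM SVC code: a thin-provisioned virtual volume
# that allocates physical extents only on first write and promotes
# frequently written ("hot") extents to a faster tier.

class ThinVolume:
    def __init__(self, virtual_gb, extent_gb=1, hot_threshold=3):
        self.virtual_gb = virtual_gb      # capacity the application owner sees
        self.extent_gb = extent_gb        # allocation granularity
        self.extents = {}                 # extent index -> tier name
        self.heat = {}                    # extent index -> write count
        self.hot_threshold = hot_threshold

    def write(self, offset_gb):
        """Record a write; allocate on first touch, promote when hot."""
        idx = offset_gb // self.extent_gb
        if idx not in self.extents:
            self.extents[idx] = "nearline"   # new data lands on cheap disk
        self.heat[idx] = self.heat.get(idx, 0) + 1
        if self.heat[idx] >= self.hot_threshold:
            self.extents[idx] = "ssd"        # move only the hot extent up a tier

    def allocated_gb(self):
        return len(self.extents) * self.extent_gb


vol = ThinVolume(virtual_gb=500)
for _ in range(4):       # a hot spot: repeated writes to the same region
    vol.write(10)
vol.write(200)           # a cold block stays on nearline disk

print(vol.virtual_gb, vol.allocated_gb())   # 500 virtual GB, only 2 GB allocated
print(vol.extents[10], vol.extents[200])    # ssd nearline
```

The application owner sees 500GB throughout; the provider has only paid for the 2GB actually written, and only the hot gigabyte sits on SSD.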
Maria Huntalas 1200007VFS email@example.com Tags:  storage hypervisor unified-recovery-manageme... storage-cloud tivoli-storage tivoli-storage-manager 1,295 Visits
The Global Tivoli User Community is hosting an all-day, Storage Focused Tivoli User Group event in New York City October 11th that will cover:
Maria Huntalas 1200007VFS firstname.lastname@example.org Tags:  tivoli tivoli-storage champion ibm 869 Visits
The IBM Champion program is still accepting nominations for experts on IBM Tivoli software. Nominations are open through August 19.
The IBM Champion program recognizes exceptional contributors to the technical community -- clients and partners who work alongside IBM to build solutions for a smarter planet. An IBM Champion is an individual who leads and mentors his or her peers and motivates them toward IBM solutions and services. Champions can be found running user groups, managing websites, speaking at conferences, answering questions in online forums, writing blogs, submitting wiki articles, sharing how-to videos, and writing technical books.
The IBM Champion program recognizes and thanks these innovative thought leaders, amplifying their voice and increasing their sphere of influence in the technical community. IBMers, partners and clients are encouraged to submit nominations through August 13th. To learn more and to submit your nominations, go to: IBM Champion homepage.
Amalore Jude 270003DGKQ email@example.com Tags:  storage-hypervisor storage-software tivoli-storage-productivi... tpc storage-management srm storage-resource-manageme... storage-blog 1,183 Visits
IT managers are broadly exploiting virtual server infrastructures -- hypervisors -- to improve efficiency, provide transparent mobility, and deliver common manageability and capabilities regardless of the type of server hardware being used. These same robust benefits are now available for virtual storage infrastructures with the IBM storage hypervisor (System Storage SAN Volume Controller and its management console, the Tivoli Storage Productivity Center).
Listen to the webcast to understand how the IBM storage hypervisor can be a complementary next step in the overall IT environment virtualization process.
Click to learn more about IBM System Storage SAN Volume Controller and IBM Tivoli Storage Productivity Center. Please reach out to IBM sales or IBM business partners to understand how the IBM storage hypervisor solution can benefit your organization's effort to virtualize and efficiently manage storage.
Maria Huntalas 1200007VFS firstname.lastname@example.org Tags:  information-archive information-retention deduplication regulatory-compliance archiving data-retention data-protection smart-archive 948 Visits
Is your archived information earning its keep? Are explosive data growth, regulatory compliance, legal discovery, and data protection on your mind? Drivers like these demand long-term, high-volume data retention. Join IBM and IDC on June 8, 2011 at 11AM EST for an informative webinar on how to "Get more from your archived information."
Laura DuBois, IDC Program Vice President, Storage Software, and Craig Butler, IBM Business Line Executive for Storage Archive, Data Protection & Retention, will address a smarter approach to archiving. Find out how companies like yours use the IBM Smart Archive strategy and lead offerings to help ensure that relevant information is properly retained and protected throughout its life cycle. As part of the Smart Archive strategy, IBM offers IBM Information Archive: a streamlined, flexible archiving solution that helps organizations of practically all sizes address their information retention needs - whether business, legal or regulatory.
Register now for this webinar on June 8, 2011!
Richard Vining 2700019R2A email@example.com Tags:  backup business-continuity recovery restore data-reduction deduplication service-management data-protection retention risk-management unified-recovery-manageme... disaster-recovery archive storage-blog compliance 3,440 Visits
On May 31, 2011, IBM announced the availability of Tivoli Storage Manager Suite for Unified Recovery, a bundle of ten Tivoli Storage Manager and Tivoli Storage Manager FastBack products with a capacity-based pricing model that is easy to order, deploy and manage.
With this offering, you can deploy any of ten different solution components, in any location and quantity to meet the unique data protection and recovery needs across a wide range of systems, applications and service level requirements. The pricing is based on a tiered Terabyte metric that measures the amount of data managed in Tivoli Storage Manager primary storage pools and FastBack repositories. License costs can be dramatically reduced through the use of built-in source and target data deduplication, and there is no charge for duplicate copies of the data that may be used for disaster recovery, testing, data mining and other purposes.
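The tiered terabyte metric described above can be made concrete with a small calculation. The tier boundaries and per-TB prices below are invented for illustration (actual Suite for Unified Recovery pricing comes from IBM); the point is that each terabyte is charged at the rate of the tier it falls into, and deduplication shrinks the managed capacity that counts toward the license.

```python
# Hypothetical illustration of a tiered per-terabyte license metric.
# Tier boundaries and prices are made up for the example.

def tiered_license_cost(managed_tb, tiers):
    """Charge each terabyte at the rate of the tier it falls into."""
    cost, prev_upper = 0.0, 0
    for upper, price_per_tb in tiers:
        band = min(managed_tb, upper) - prev_upper   # TB falling in this band
        if band <= 0:
            break
        cost += band * price_per_tb
        prev_upper = upper
    return cost

# Made-up tiers: first 100 TB at $500/TB, next 400 TB at $350/TB, $200/TB beyond.
TIERS = [(100, 500.0), (500, 350.0), (float("inf"), 200.0)]

raw_tb = 300                         # data before deduplication
dedup_ratio = 3.0                    # assumed 3:1 reduction in primary storage pools
managed_tb = raw_tb / dedup_ratio    # only managed capacity counts toward the license

print(tiered_license_cost(managed_tb, TIERS))   # 100 TB * $500 = 50000.0
```

With a 3:1 deduplication ratio, 300TB of raw data licenses as 100TB, all of it in the cheapest-to-enter first tier.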
The individual products included in this comprehensive package are:
Tivoli Storage Manager Extended Edition: Provides core backup/restore for a wide range of operating systems; broad support for tiers of storage; NDMP, IBM DB2® and Informix® support; source and target deduplication; advanced disaster recovery planning; and much more.
Tivoli Storage Manager for Databases: Performs online, consistent and centralized backups for Oracle and SQL Server to avoid downtime, protect vital enterprise data infrastructure and minimize operating costs.
Tivoli Storage Manager for Enterprise Resource Planning: Performs online, consistent and centralized backups for SAP environments.
Tivoli Storage Manager for Mail: Protects data on email servers running Lotus® Domino® or Microsoft® Exchange, with granular restore of Exchange email objects.
Tivoli Storage Manager for Virtual Environments: Automatically discovers and protects VMware virtual machines; offloads backup workloads to a centralized server and enables flexible, near-instant recovery.
Tivoli Storage Manager for Space Management: Moves inactive data to reclaim online disk space for important active data; frees administrators from manual file system pruning tasks; and defers the need to purchase additional disk storage.
Tivoli Storage Manager for Storage Area Networks: Provides high-performance backup/restore by moving data transfers off the LAN.
Tivoli Storage Manager FastBack: Provides efficient block-level incremental backup and near-instant restore for critical Microsoft Windows® and Linux® servers and applications, in the data center and in remote offices.
Tivoli Storage Manager FastBack for Microsoft Exchange: Provides the ability to recover individual Microsoft Exchange objects such as email, attachments, calendar entries, contacts and tasks.
Tivoli Storage Manager FastBack for Bare Machine Recovery: Provides operating system volume recovery following a disaster or catastrophic server failure, fully restoring Windows and Linux systems within an hour.
Existing customers of Tivoli Storage Manager are also able to take advantage of this new pricing model, which eliminates the need to count Processor Value Units and helps provide better visibility and control of future licensing costs. For more information on converting to this model, please contact your IBM Sales Rep or Business Partner.
Later this week, I will post a list of the advanced capabilities that Tivoli Storage Manager Suite for Unified Recovery can bring to your IT environment with the overall benefits of reducing data growth, improving operational efficiency and dramatically reducing costs.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Richard Vining 2700019R2A firstname.lastname@example.org Tags:  bmr instant-restore tivoli cdp storage-blog backup disaster-recovery storage-management replication fastback 2,114 Visits
Tivoli will be performing a live demonstration of Tivoli Storage Manager FastBack and Tivoli Storage Manager FastBack for Exchange data protection products. These welcome additions to the Tivoli Storage Manager product family provide the ability to meet aggressive Recovery Point and Recovery Time Objectives in an organization's data protection service.
The TSM FastBack family provides many advanced features including:
This demonstration is open to Customers, Business Partners and IBM employees.
There are Web Conference and Audio Conference components to this demonstration.
Conference ID is 9663533
Prior to the web conference, we suggest you do the following:
1) Go to www.sametimeunyte.com
2) Click on Support
3) Click on Lotus Sametime Unyte Meeting System Check
4) Select your attendee type and click Next
5) Proceed with the system check and install any required plug-ins.
Toll Free: 888-426-6840
The scheduled dates are:
Amalore Jude 270003DGKQ email@example.com Tags:  tpc srm tpc-installation storage-blog tivoli-storage-productivi... 962 Visits
Now available! A video that showcases the ease of installing the Tivoli Storage Productivity Center suite onto a Windows server. The video, created by Mike Griese, captures some key developments in version 4.2:
- No need for Common Agent Manager
- Fewer values to enter on setup screens
- Faster overall installation time
The video takes you through a step-by-step, wizard-based installation of Tivoli Storage Productivity Center, Tivoli Integrated Portal and Tivoli Storage Productivity Center for Replication.
Amalore Jude 270003DGKQ firstname.lastname@example.org Tags:  tpc storage-blog srm white-paper tivoli-storage-productivi... 1,466 Visits
Tivoli announces the availability of two white papers for Tivoli Storage Productivity Center.
To access more related white papers, click here.
Maria Huntalas 1200007VFS email@example.com Tags:  recovery business-continuity archive service-management server-virtualization tivoli retention tsm disaster-recovery data-protection backup data-reduction deduplication storage-blog risk-management virtualization restore tsm-virtual-environments vadp 1,328 Visits
Attention all Tivoli User Community Members! There will be free online training offered on May 26, 2011 from 9:30AM - 1:30PM EST on the topic of: Tivoli Storage Manager for Virtual Environments.
The following topics will be covered:
-Explain the different types of virtual machine backups (lecture)
-Explain the benefits of Tivoli Storage Manager for Virtual Environments (lecture)
-Perform full VM-level backups and restores (demo)
-Perform file-level backups and restores (demo)
-Install and configure Tivoli Storage Manager for Virtual Environments (demo)
-Perform Block-Level Backups with CBT (demo)
Topics were selected from the IBM Tivoli Storage Manager 6.2 for Virtual Environments workshop and tailored for a one-day online presentation.
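The "Block-Level Backups with CBT" topic above rests on a simple idea: after a full backup, track which blocks have changed and copy only those. Here is a minimal sketch of that changed-block-tracking comparison. The helper names are invented for illustration; this is not the VMware vStorage API or Tivoli Storage Manager code, and in practice a hypervisor reports changed blocks directly rather than rescanning data.

```python
# Minimal sketch of changed-block tracking (CBT): after a full backup,
# only blocks whose contents differ are sent to the backup server.
import hashlib

BLOCK = 4  # bytes per block, deliberately tiny for the example

def blocks(data):
    """Split a byte string into fixed-size blocks."""
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def changed_blocks(old, new):
    """Return indexes of blocks that differ from the last backup."""
    old_b, new_b = blocks(old), blocks(new)
    return [i for i, b in enumerate(new_b)
            if i >= len(old_b)
            or hashlib.sha256(b).digest() != hashlib.sha256(old_b[i]).digest()]

full = b"AAAABBBBCCCC"        # disk state at the last full backup
now  = b"AAAAXXXXCCCCDDDD"    # block 1 rewritten, block 3 appended

print(changed_blocks(full, now))   # [1, 3] -> only two blocks travel to the server
```

Copying two blocks instead of four is the whole benefit: the incremental backup window and network load shrink in proportion to the change rate, not the total VM size.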
Maria Huntalas 1200007VFS firstname.lastname@example.org Tags:  restore disaster-recovery recovery tsm-virtual-environments backup risk-management server-virtualization service-management archive vadp storage-blog deduplication virtualization data-protection business-continuity data-reduction tsm tivoli retention 1,494 Visits
Join us on May 17th at 1:00 p.m. Eastern / 10:00 a.m. Pacific to hear all about Smarter Storage & Data Management for Virtual Server Environments.
Featured Speakers: Richard Vining, IBM Tivoli Storage Product Marketing Manager & John Connor, IBM Tivoli Storage Product Manager
There is a huge transformation happening in IT organizations! These organizations are migrating from single-purpose physical servers to consolidated virtual machine technologies. The benefits of virtualization are many: cutting acquisition, management and facilities costs by reducing the number of physical machines; increasing service levels through faster server provisioning; and enabling new delivery models such as cloud services. However, virtualizing servers does not reduce the amount of data that is created and stored; in fact, it can have the opposite effect as virtual machines are moved or de-provisioned. This presentation will describe smarter ways of managing all this data and the infrastructure that stores it, featuring the IBM Tivoli Storage Productivity Center family of products.