In the two years since IBM acquired Butterfly, it has generated hundreds of Analysis Engine Reports (AERs), analyzed billions of gigabytes, and established facts about Tivoli Storage Manager (TSM) that should make the competition sit up and take notice.
The Backup Analysis Engine report from IBM Butterfly Software uses light-touch, agent-less software technology to analyze an existing heterogeneous data backup environment. It is a non-intrusive analysis based on empirical production data that is collected in minutes and incorporated into the report.
Why is Butterfly important? Competitive analysis in the Gartner Magic Quadrant for Backup and Recovery 2013 says that between 2012 and 2016, one-third of organizations will change backup vendors due to frustration over cost, complexity and/or capability. Being able to say conclusively that a TSM solution can reduce backup infrastructure costs by as much as 38% compared with some competitive products opens the door for IBM to pursue that 33% of organizations looking for a change.
AER is the Key
More demand for AERs is expected with the launch of the automated “self-service” AER generation model. Scheduled to go live at the beginning of 2H 2014, it will scale out as a service to IBM and its Business Partners. These developments drive home the point that Butterfly AERs have metamorphosed into a well-accepted, standard approach to storage infrastructure analytics.
Meet the Butterfly Storage and Backup Assessment Team at Pulse 2014
If the butterfly flutter has caught your interest, visit Pulse 2014, Feb 23-26 in Las Vegas, and meet the folks who deliver Butterfly Storage and Backup Assessments in the IT Optimization section of the IBM booth. Find out how your company can use business analytics to dramatically lower the cost of running your backup and recovery or primary storage infrastructure.
I am often asked... "When can I use FlashCopy Manager with my EMC disk array?" (substitute "EMC" with your favorite vendor)
With FlashCopy Manager for Windows, you can leverage hardware snapshots for any disk array that has a VSS Hardware Provider. This is because Windows has a built-in architecture (referred to as "VSS") that enables pluggable snapshot support. We wrote a developerWorks article that explains how this works and how it integrates with TSM a few years ago. (Note: This article refers to "TSM for Copy Services" instead of "FlashCopy Manager" because it was written before the product name was changed.)
But, with FlashCopy Manager for UNIX and Linux and FlashCopy Manager for VMware, you must wait until support is added for your desired disk array. Last year, IBM partnered with Rocket Software to develop a device adapter pack that plugs in to FlashCopy Manager for UNIX and Linux and FlashCopy Manager for VMware to extend support to more storage devices. You install it on top of an existing FlashCopy Manager (version 4.1 or later) installation on the application server being protected by FlashCopy Manager for UNIX (or on the proxy backup server in case of FlashCopy Manager for VMware) and configure it to talk to the storage device. After that, you are able to leverage the power of FlashCopy Manager snapshot protection for the hardware device supported by that device adapter pack!
At the end of last year, Rocket Software released support for EMC Symmetrix (VMAX and DMX). They are planning to add support for more disk arrays in 2014. If you have devices that you want to see added, contact Rocket Software.
With a new school year underway, vacation season come and gone for many, and the Labor Day long weekend upon us in North America, entering September marks the unofficial end of summer. For many this is a somewhat depressing time of year, as we realize that colder temperatures and the onset of winter aren’t far off.
However, it’s not all bad news. Some of us prefer outdoor activities in the fall and winter months, and when it comes to business, the fall presents a renewed interest in sharpening our skills and seeking networking opportunities by attending industry conferences and events.
For Storage professionals in North America, an ideal opportunity comes in the form of Storage Decisions New York on September 16 & 17. Storage Decisions New York plans to bring together over 500 end-users, independent experts, analysts, and top solution providers to engage in thought-provoking presentations, interactive networking opportunities, and sponsor showcases featuring the latest trends and technologies impacting the storage industry. The two-day conference is the only place you will find the industry's foremost independent experts – and the most qualified group of storage professionals – under one roof.
As a platinum sponsor of Storage Decisions New York, IBM will have a multi-faceted presence at the conference with ample opportunities to engage with the storage community. One of the highlights is our Tech-in-Action talk, where IBM’s Storage Software Business Strategist Ron Riffe will outline IBM's point of view on The Critical Decisions for Improving The Economics of Storage. Ron will touch on a range of considerations including the need for improved administration, the role of software-defined storage and the impact of flash – just to name a few.
Over the course of the two-day event, IBM storage experts will be available in booths 24 and 25 to meet attendees and discuss practical solutions to today’s storage challenges. The IBM booth will also be where attendees can pick up their complimentary conference USB key, which will be loaded with conference-wide materials and presentations.
Storage Decisions New York is worth taking a look at as a great way to kick off the fall conference cycle. If you're planning to attend, stop by and visit us. If you happen to be on the west coast and are concerned that New York is too far to travel, don't worry: Storage Decisions is stopping in San Francisco on October 30.
Last Monday, EMC announced ViPR as its new Software-defined Storage platform. Almost simultaneously, Chuck Hollis described it as ‘Breathtaking’ in his usually excellent blog. I must admit, one thing I routinely find breathtaking about EMC is their approach to marketing. They have a knack for being able to take unexceptional technology (or, as in this case, combinations of technology and theories about the future), and spin an extraordinarily compelling story. With all seriousness and without tongue in cheek… Nicely done EMC! Read more.
IBM Edge2013 is fast approaching, and while the conference includes four events within an event to appeal to a wide range of attendees, the cornerstone of Edge from my perspective is the rich technical content to be delivered within Technical Edge.
Technical Edge is a 4.5-day technical event for IT professionals and practitioners focused on sharpening expertise, discovering new innovations and learning industry best practices. You can check out the published agenda of the over 350 Technical Edge sessions spanning 16 tracks, which are sure to hit on the top IT trends, opportunities and challenges we collectively face.
Specifically related to Cloud & Smarter Infrastructure, we’ve embedded over 50 technical sessions, demos and hands-on labs specifically focused on Tivoli solutions, with the majority going deep on Tivoli Storage capabilities. Further, there are an additional 30 related sessions of interest to Tivoli users (e.g. IBM Storwize V7000, IBM Flex System Manager, IBM GPFS). These 80 sessions are scattered across the 16 tracks within the Technical Edge conference. (Hint: you can find the majority of these sessions within the Business Continuity and Systems Management tracks.)
Some of the session highlights I’m looking forward to seeing are:
“IBM's New Tivoli Storage Manager Operations Center” – Our new TSM GUI!
“IBM Tivoli Storage Manager and the Cloud” – This session will describe TSM’s multifaceted cloud strategy
“Protection of Virtual Machines using Tivoli Storage Manager for Virtual Environments and Tivoli Storage FlashCopy Manager”
Tivoli Storage Manager for Virtual Environments - Data Protection for VMware: Solution Design
Introduction to IBM's Virtual Storage Center (VSC) - Learn how you can gain storage efficiencies and grow your business using VSC’s capabilities.
How IBM SVC, Storwize V7000 and TotalStorage Productivity Center are used in real life to migrate data centers
Additionally, for those that want to roll up their sleeves and get their hands on some of these solutions, I would recommend the following hands-on labs:
IBM’s New Tivoli Storage Manager Operations Center hands-on lab
IBM Tivoli Storage FlashCopy Manager: New Features and Operation in Version 3.2
A double session - IBM Tivoli Storage Manager for Virtual Environments: Protecting and Recovering Virtual Machine Data
We will also have an interesting “Birds of a Feather (BOF)” session on Business Continuity led by Sanjay A. Patel – focused on using the Tivoli Storage Manager suite to help you proactively protect your data.
I encourage you to join us at Technical Edge to enhance your knowledge of our Tivoli solutions, and I look forward to “getting technical” in Vegas. Learn more and register today.
If you are like most of the clients I deal with, you are starting to recognize the storage part of your infrastructure represents a BIG opportunity for improvement in 2013 – in agility, in efficiency, and in cost. When demand (data growth) outpaces supply (ability of hardware vendors to increase areal density driving down costs) as dramatically as it has begun to do, something has to change in the way storage infrastructure is approached in order to help balance the equation again. That ‘change’ creates a perfect economic environment for vendor innovation resulting in creative new solutions for clients. If you have been paying attention to the storage space, you’ve noticed an increased investment pace as vendors explore technical innovations and try to explain these innovations to potential clients. One of my biggest frustrations though is when the industry can’t settle on terminology for describing a solution approach leaving clients thoroughly confused and paralyzed. Read more...
As I’ve been working with many members of the Tivoli Storage team to coordinate our involvement at IBM Edge 2013, and as the conference nears, it struck me the other day -- Edge really does have something for everyone. While the historical focus of this event has been storage—and storage content remains particularly strong—this year, IBM is expanding that focus to address related areas of IT optimization as well: cloud, smart analytics and big data, business continuity, and many more.
And it’s not only the IT topics being covered that are expanding -- so are the types of audiences that will have an interest in Edge. With four events under one roof, each aimed at the needs of a different audience, Edge 2013 promises not to disappoint – regardless of your role in IT. Just in case you aren’t sure whether Edge is for you, below is a summary of the “four events within the event” and the Tivoli Storage highlights of each:
Executive Edge: Executive Edge is a 2.5-day event for IT executives and leaders focused on discovering new innovations for managing storage growth, accelerating cloud deployments, unlocking the insights from big data, and securing critical information and processes. Deepak Advani, General Manager of IBM Tivoli Software, who takes the stage multiple times during Executive Edge, will host a two-hour session entitled "Key Insights for Modernizing Your Data Protection Infrastructure" designed to help you shield critical data from threats both known and unknown. Deepak will not only provide IBM’s perspective on this critically important topic but will invite clients, analysts and IBM Business Partners to the stage to join the discussion. In addition to a wealth of thought leadership, this session will unveil the latest enhancement to the Tivoli Storage portfolio: IBM Tivoli Storage Manager Operations Center, which was previewed to rave reviews earlier this year at Pulse. More details on Executive Edge can be found by previewing the agenda.
Technical Edge: Technical Edge is a 4.5-day technical event for IT professionals and practitioners focused on sharpening expertise, discovering new innovations and learning industry best practices. Featuring over 350 sessions to choose from, you’ll hear from product and development experts with deep technical expertise who will not only introduce new IBM offerings and updates, but will put them into action through hands-on labs and demonstrations that closely match real-world operating conditions. In particular, for those interested in Storage software, there will be 35 Tivoli Storage-led sessions spanning 9 of the 16 technical tracks, with especially deep content in the business continuity and systems management tracks. For more specifics check out the Technical Edge agenda.
MSP Summit: For Managed Service Providers (MSPs), the two-day MSP Summit focuses on topics that are unique to this community and is designed to help organizations accelerate service delivery to become next-generation MSPs. Strategic discussions will include topics such as Cloud, Next Generation Systems & Storage, and Big Data. One particularly interesting business opportunity for MSPs today is Cloud-based Backup as a Service, with two successful Tivoli Storage MSPs (Cobalt-Iron and Front-Safe) slated to share their experiences.
Winning Edge: The tail end of Edge will include a three-day sales bootcamp catering to the needs of IBM Business Partner (BP) sales professionals. Everything BPs need to know to be successful will be discussed, from the opportunities related to IBM’s Butterfly acquisition to Storage Virtualization, supported by the marketplace perspectives of an independent Storage consultant. This information won’t just be theoretical; it's all founded on the real-world, quantified results already being achieved by IBM Business Partners and customers around the world.
While I’ve tried to highlight the four discrete events within the Edge event, this really only scratches the surface of Edge. There are all the other valuable aspects – the hours of networking opportunities, Executive 1x1s and the Solution Expo Hall, where you can connect with subject matter experts from over 50 sponsors, just to name a few.
Clearly I can’t do the Edge conference justice in a single blog, but hopefully this gives you a sense of what you can take advantage of and that there truly is something for everyone at Edge. To learn more, please check out the conference website; I hope to see you in Vegas in June at Edge2013.
I’ve been on the topic of software-defined storage for three posts now – one with my perspective, one covering a multi-vendor round table at Storage Networking World, and now on an intriguing bit of research.
Earlier this year, IBM sponsored EMA research into Demystifying Cloud. The project was intended to collect lessons learned from organizations of all sizes that had completed at least the first stage of their initial private cloud deployment, and then use that data to provide guidance to organizations considering the purchase of cloud technologies. Along the way, EMA discovered what most folks would not have predicted -- the critical role of storage for companies of any size and vertical when planning and implementing a private cloud.
Check out my latest post in TheLine for the rest of the story.
I’m just returning from the SNW Spring conference in Orlando. It seemed sparsely attended, but my 5-foot-tall wife of almost 28 years has always told me that dynamite comes in small packages (I believe her!).

As I noted in my last post, I was in Orlando to participate in a round table discussion on storage hypervisors hosted by ESG Senior Analyst Mark Peters. I was joined by Claus Mikkelsen – Chief Scientist at Hitachi Data Systems, Mark Davis – CEO of Virsto (now a VMware company), and George Teixeira – CEO of DataCore. Conspicuously missing from the conversation, both at this SNW and at a similar round table held during the SNW Fall 2012 conference, was any representation from EMC. More on that in a moment.

The session this time drew a crowd roughly three times the size of the Fall 2012 installment – a completely full room. And the level of audience participation in questioning the panel members further demonstrated just how much the industry conversation is accelerating. I was pleased to see that most of the discussion was focused on use cases for what was interchangeably referred to as storage virtualization, storage hypervisors, and software-defined storage. Check out my new blog – TheLine – for a view on a few of the use cases that were probed.
Great timing on this post, Ron. I was just reading an editorial by Rich Castagna on searchstorage.com pondering whether "software-defined storage" was just the latest marketing hype and that vendors were somehow claiming that this new capability would make storage hardware obsolete.

You make the point, very eloquently, that this is not hype, that it has been deployed widely, and that it does not mean the end of storage hardware -- it just shifts the quantity and the costs of the storage hardware in the customer's favor, while also creating new use cases that solve real concerns in the market.
Back at the Storage Networking World Fall 2012 conference, I participated in a round table on storage hypervisors hosted by ESG Senior Analyst Mark Peters. I was joined by Claus Mikkelsen – Chief Scientist at Hitachi Data Systems, Mark Davis – CEO of Virsto (now a VMware company), and George Teixeira – CEO of DataCore. Following the conference, Mark Peters posted a very nice series of three video blogs with perspective from the round table participants. They are worth a listen.

The discussion is continuing at SNW Spring 2013 at Rosen Shingle Creek in Orlando, Florida. The panel discussion "Analyst Perspective: The Storage Hypervisor: Myth or Reality?" will happen on Tuesday, April 2 at 5:00 pm EDT.

As we prepare for the round table next week, I thought it worthwhile to offer a point of view on storage hypervisors. Check out my new blog – The Line – for more information.
Royse Wells, Chief Storage Architect for International Paper, discusses Tivoli Storage Manager Operations Center, previewed at Pulse 2013. TSM Operations Center is a new graphical interface that helps administrators and management get quick summary views of the backup environment and simplifies administration.
Jeff Jones, UNUM
UNUM Uses Tivoli Storage Manager for Virtual Environments
Jeff Jones is senior infrastructure manager at UNUM, a leading provider of financial protection benefits in the US and UK. UNUM has about 85% virtual servers today. UNUM uses Tivoli Storage Manager for Virtual Environments to deliver faster backups and restores, and reduce the risk of data loss for 650 Windows and Linux VMs.
Klavs Kabell, IT-WIT
Modernizing Backup for Today’s Virtual Environments
Klavs Kabell is a Senior System Consultant at IT-WIT, an IBM Business Partner in Denmark specializing in backup solutions. Klavs discusses how backup solutions have evolved as VM deployments have grown. Tivoli Storage Manager for Virtual Environments helps simplify VM backup administration and tracking, while incremental ‘forever’ technology improves storage efficiency.
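As a rough illustration of why incremental-forever backup improves efficiency, here is a back-of-the-envelope sketch (not Tivoli code; the server size and change rate are invented assumptions) comparing the data moved over four weeks by a classic weekly-full/daily-incremental scheme against an incremental-forever scheme:

```python
# Illustrative only: total data transferred over 28 days for a hypothetical
# 10 TB server with an assumed 2% daily change rate. Figures are made-up
# assumptions for the sketch, not TSM measurements.
FULL_SIZE_TB = 10.0   # assumed size of one full backup
CHANGE_RATE = 0.02    # assumed fraction of data changed per day
DAYS = 28

def weekly_full_daily_incremental(days: int) -> float:
    """Classic scheme: a full backup every 7th day, incrementals otherwise."""
    total = 0.0
    for day in range(days):
        if day % 7 == 0:
            total += FULL_SIZE_TB                # periodic full backup
        else:
            total += FULL_SIZE_TB * CHANGE_RATE  # only the changed data
    return total

def incremental_forever(days: int) -> float:
    """Incremental forever: one initial full, then only changed data."""
    return FULL_SIZE_TB + FULL_SIZE_TB * CHANGE_RATE * (days - 1)

print(round(weekly_full_daily_incremental(DAYS), 1))  # 44.8 TB moved
print(round(incremental_forever(DAYS), 1))            # 15.4 TB moved
```

Under these assumed numbers the incremental-forever approach moves roughly a third of the data, which is the storage and network efficiency the session describes.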
Thomas Bak, Front-safe
Cloud backup and archive using TSM and Frontsafe Portal
Front-safe received the Best Cloud Solution award at the IBM Pulse 2013 conference, and the 2013 IBM Beacon Award for the Best Solution to Optimize the World’s Infrastructure. Learn about the value of enabling backup as a cloud service, using Front-safe Portal software.
Laura DuBois, Program VP of Storage for IDC, and Steve Wojtowecz, IBM VP of Storage and Networking Software, discuss client opportunities and requirements for storage clouds and compute clouds. Client cloud storage requirements include backup and archive clouds, file storage clouds, and storage that supports compute clouds.
Chris Dotson, IBM CIO Office
IBM’s storage transformation featuring SmartCloud Virtual Center
Chris Dotson works in IBM’s CIO Office as a Senior IT Architect for Services Transformation. He is guiding IBM’s own storage transformation. As a large enterprise, IBM manages over 100 petabytes of data, growing at 25% per year. Chris discusses block storage virtualization, automated block storage tiering, file cloud storage, and automated block storage management at IBM. He shows how SmartCloud Virtual Storage Center is reducing storage costs by 50% with no noticeable performance impact to users.
BJ Klingenberg, IBM Global Technology Services
IBM Global Technology Services Uses Tivoli Storage Productivity Center (TPC) to Manage Clients’ Storage Environments
BJ Klingenberg is a Distinguished Engineer and Enterprise Storage Management lead for IBM. BJ shares his experiences using IBM Tivoli Storage Productivity Center in IBM’s Service Provider environment. Service Provider environments are governed by Service Level Agreements, so managing capacity, performance and availability are essential capabilities. Storage efficiency is essential to remaining competitive. See how TPC helps IBM deliver outstanding customer service at competitive prices.
Jason Buffington, ESG Senior Analyst, and Tom Hughes, IBM Worldwide Storage Executive, discuss business and technical challenges for data protection. Tom and Jason discuss new solutions and best practices for protecting data more efficiently and effectively for today’s cloud, mobile and virtual environments.
Colin Dawson, TSM Server Architect, introduces Tivoli Storage Manager Operations Center, the next generation of backup administration from IBM. He describes how TSM Operations Center was designed and built using extensive user feedback.
Jonathan Bryce, OpenStack Foundation founder and Todd Moore, IBM
OpenStack Provides Compute, Storage and Network Interoperability for Clouds
The OpenStack Foundation has gained 170 corporate and over 8,200 individual members since its inception in 2012, making it one of the fastest-growing cloud standards. Jonathan Bryce, Executive Director and founder of the OpenStack Foundation, and Todd Moore, IBM Software Group Director of Interoperability and Partnerships, discuss the capabilities and opportunities for building cloud solutions using OpenStack to manage compute, storage and network resources.
Deepak Advani, IBM
Optimizing IT Infrastructures for Today’s Workloads
Deepak Advani, General Manager of Tivoli Software, discusses top issues and opportunities facing clients as they adopt new breeds of applications to engage with customers and improve operations using mobile devices, cloud and analytics.
Wow! What an exciting week it has been at Pulse 2013! Especially for Tivoli Storage! In addition to the inspiring words of Day 3’s keynote speaker, Peyton Manning, I was equally inspired by many of our Tivoli Storage Business Partners like Cobalt Iron, Silverstring, Front-Safe and STORServer, who led sessions on exciting topics ranging from how to create a cloud service in a TSM environment to how to transform your data backup costs into business opportunities.
Let’s start with the final day of Pulse General Sessions, which kicked off to a packed auditorium. Jamie Thomas, IBM Tivoli VP of Strategy and Development, took the stage first with a panel of IBM experts including CTOs Dave Lindquist, Jerry Cuomo, Sandy Bird, and Sky Matthews. These key technology leaders, with Jamie facilitating the discussion, led us through their technology roadmaps around what’s new and what’s coming in Cloud, Security Intelligence, the Mobile Enterprise and Smarter Physical Infrastructure. Following this panel discussion, Bruce Ross, General Manager for IBM GTS, helped explain how his team is helping to enable the acceleration of cloud services. It was a great line-up of experts, and many shared examples of how our technologies are driving innovation. You can watch the replay of the General Session here.
Next up on the main stage was Pulse 2013 guest speaker Peyton Manning of the Denver Broncos. Peyton gave a heartfelt speech on the “art and science of decision-making.” Are you making the right decisions to deliver innovation? Are you sticking to your decisions? These were some of the key topics covered. Offering his perspective on leadership, I think my favorite all-time quote was this: “You can either be a warrior or a worrier.” So true. Decisions backed by facts and data analysis help you drive the best decisions, and technology has greatly impacted this process, he pointed out. Scott Hebner, WW Tivoli VP of Marketing, joined him onstage for some great Q&A that ended with Scott going long, “up and out,” to catch a bullet pass from Peyton. And, yes, the pass was caught!
Now, on to our Tivoli Storage sessions today, which featured many of our Business Partners. Thomas Bak, CSO & Partner of Denmark-based Front-safe, kicked off the morning with a very interactive discussion on how to create a cloud service around your Tivoli Storage Manager environment. Front-safe, a recent 2013 Beacon Award winner, is bringing TSM into new markets via a cloud-enabled portal. He mentioned that 3,000 end customers are already using this solution for backup, backing up literally 10,000+ servers! Front-safe’s new cloud backup service provider, i-Sanity, also addressed their model of “backup as a service”; they are the first Front-safe backup service provider in South Africa. You can learn more by watching this great video.
Silverstring Ltd, another Tivoli Storage Business Partner, led a session with customer Rabobank International, a large global financial institution with many dispersed TSM systems, who told us all about best practices that have been used for daily TSM administration. Great content was also shared that focused on how these best practices and cloud-based automation software can be combined to actually lower the cost of delivering TSM services and improve service levels.
Later in the day, Richard Spurlock of Cobalt Iron held an engaging session on how to transform your backup costs into business opportunities. Cobalt Iron combines TSM backup with a cloud experience in a simple deployment model that’s all about flexible deployment options. Richard really helped the audience better grasp how the costs and complexity of enterprise backup can really “bankrupt” its value – and how Cobalt Iron’s solutions can turn your backup investments into a flexible, high-value data protection solution. Earlier in the week, Cobalt Iron had been honored as a finalist for the 2013 Tivoli Business Partner Awards. Congratulations!
In one of the final Pulse 2013 storage sessions of the day, Business Partner STORServer delivered a compelling presentation on how to provide Backup-as-a-Service with their STORServer Backup Appliance and TSM. This session was of great help both for large enterprises looking for how best to charge for backup services and for MSPs looking for additional revenue streams.
We also heard from customers like Nyherji, who told the audience all about how they use FlashCopy Manager with TSM Node Replication to increase service levels and obtain high availability. I was especially interested to hear about all the stellar benefits that these Tivoli storage solutions brought to their business – from absolutely zero downtime to hugely improved backup and recovery times. Petur Eythorsson of Nyherji told a great story of how they completely redesigned their TSM environment and added TSM Node Replication and FlashCopy Manager to complete the solution.
I wrapped up my week at Pulse 2013 visiting with both old and new friends and colleagues later that evening, continuing to recap what was my 3rd, and, I believe, BEST Pulse ever! Congratulations and thank you to the IBM Tivoli Pulse team for a job well done. I can’t imagine how the next Pulse will trump this one, but, in true Tivoli fashion, I am sure it will.
And, in case you are having Pulse 2013 withdrawals already, we’ve captured some engaging storage videos this week that are available to you now. I hope you can take a moment to relive some of the great Tivoli Storage moments this week and listen to all the great things that analysts, thought leaders, our customers, and Business Partners are saying about Tivoli Storage solutions:
And, in case you would like to hear more about what’s new and cool coming from Tivoli Storage, you can always join us again in Vegas this June for IBM Edge2013, which will bring you more opportunities to connect with your colleagues and learn about industry best practices for storage management, virtualization, and cloud technologies.
Following an outstanding PurePalooza party on Monday night that featured a 2-hour performance by 6-time Grammy Award winner Carrie Underwood, you might have expected Tuesday’s General Session to be a little quieter than usual. However, that wasn’t the case at all as the energetic vibe from today’s session picked up right where Monday left off -- helping to quickly shake off the effects of a wild Monday night for many.
This morning’s 90-minute general session was themed “Best Practices in Action” and featured a client panel of IT leaders from AT&T, Equifax, Carolinas Healthcare System and the Port of Cartagena sharing how they are converting opportunities from Cloud, Mobility and Smarter Physical Infrastructures into tangible business outcomes.
The Unified Recovery & Storage Management track picked up on the General Session theme, with Tuesday’s breakout sessions featuring no fewer than TEN Tivoli Storage clients sharing real-life examples of how they were applying IBM Tivoli Backup and Recovery and Storage Management solutions to address a host of complex challenges. While this represents just a tiny sliver of the valuable content, some of the session take-aways included:
• Irfan Karachiwala (Ph.D.), Manager, Enterprise Data Strategy at Kindred Healthcare, a post-acute healthcare provider with over 450 locations in the U.S., has realized improvements in Recovery Point and Recovery Time Objectives by switching from data-only backups to VM-based image backups using Tivoli Storage Manager for Virtual Environments;
• Redbook author Gerd Becker of Empalis Consulting, a German-based IBM Premier Business Partner, recommended the use of TSM FastBack for Workstations to provide continuous protection and meet shorter Recovery Point Objectives (by the way, you can try TSM FastBack for Workstations for free through the currently available trial);
• BJ Klingenberg, Distinguished Engineer in IBM Global Technology Services, which uses Tivoli Storage Productivity Center to manage the storage environments of over 1,000 accounts and 400 petabytes of data, suggested taking 12-hour storage environment snapshots to facilitate problem isolation and determination as part of a sound change and configuration management strategy;
• Petur Eythorsson of Nyherji, a Managed Service Provider in Iceland that manages 50 TSM servers mainly supporting mid-sized Windows-based environments, confirmed that, like many others, his client base has little patience for any recovery time beyond 2 hours;
• Huey Cantrell of Blue Cross Blue Shield of Louisiana reminded us that the overwhelming majority of restore requests are for data that was recently backed up, so his IT organization spends its time and energy focused on the ability to quickly recover data created in the past few days;
• Eduardo Zalamena of Mitsubishi Motors of Brasil pointed out that within large organizations you can’t treat all data the same way. For example, a 2-hour restore time can be catastrophic for some systems while being entirely appropriate for others. Eduardo recommended the implementation of system-specific recovery objectives to cost-effectively address requirements;
• John Clarke from United Healthcare, which protects over 70 million Americans, has altered his team’s approach to backup and recovery – emphasizing restore over backup – primarily because of the emergence of Big Data systems such as Netezza and Teradata.
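The system-specific recovery objectives Eduardo described can be sketched very simply. This hypothetical Python fragment (the system names, tier assignments and restore-time figures are invented for illustration; only the 2-hour sensitivity echoes the panel) maps systems to their own Recovery Time Objectives and flags restores that would miss them:

```python
# Hypothetical sketch of system-specific Recovery Time Objectives (RTOs).
# System names and tiers are invented examples, not any client's real setup.
RTO_HOURS = {
    "order-entry-db": 2,    # customer-facing: 2 hours is the limit
    "data-warehouse": 24,   # analytics: a longer outage is tolerable
    "file-archive": 72,     # cold data: days are acceptable
}

def meets_rto(system: str, estimated_restore_hours: float) -> bool:
    """True if an estimated restore time satisfies the system's RTO."""
    return estimated_restore_hours <= RTO_HOURS[system]

print(meets_rto("order-entry-db", 1.5))  # True: within the 2-hour RTO
print(meets_rto("order-entry-db", 3.0))  # False: would breach the RTO
print(meets_rto("data-warehouse", 3.0))  # True: well within 24 hours
```

The point of the tiering is cost: holding every system to the strictest RTO forces the most expensive protection everywhere, while per-system objectives let the spend match the business impact.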
On a day that put IBM clients “front and center”, it was only fitting to close Tuesday with the Tivoli Storage Birds of a Feather meeting. This two-hour, highly interactive discussion gave clients the opportunity to get all their questions answered and provide direct feedback to Tivoli Storage executives, developers and product managers.
Based on the buzz around the Storage breakouts, it was clear that the client focus on Tuesday was a hit, so a huge thank-you to all the clients who took the time to prepare and share their stories at Pulse 2013. Pulse wouldn't be a reality without your contributions!!!
As another Pulse begins to wind down, it’s time to start thinking about IBM Edge2013 in June. The Edge conference will bring us back to Las Vegas to hear more clients describe how they are optimizing storage and IT. If you weren’t able to join the 8000 of us at Pulse 2013, start making plans to attend Edge by finding out everything you need to know (including the early-bird discount available through the end of April) at the IBM Edge2013 Conference website.
Day 1 at Pulse 2013 was grand and exciting! I am not in Vegas, so how do I know? No points for guessing: all thanks to social media, #IBMPulse, and my friends in Vegas who are tweeting away every moment of the event.
Today is for IBM’s Business Partners! Deepak Advani, Tivoli General Manager, reemphasized the role of Business Partners and how critical it is for IT innovations to achieve business results. Todd has captured the essence of this event very well in his Pulse blog.
Day 1 was also the day of awards!! Congratulations are in order! The IBM Tivoli Award for Best Data Management Center Solution was picked up by CobaltIron. CobaltIron will cover their solution during their Wednesday session, including how to transform data backup costs into a business opportunity – a must-attend session! (Watch out for session# 1914, Room# 114, 2:00pm on Mar 6.)
Front-Safe got the IBM Tivoli Award for Best Cloud Solutions. Watch out for their session on March 6 (session# 2436, room# 114) highlighting how to create a Cloud Service around IBM Tivoli Storage Manager.
Then came the real example of how we turn opportunities into positive outcomes: Chris Gardner! We talk about technologies all day, but the one that created ripples on Twitter was the “Pursuit of Happyness”. Thanks, Todd, for blogging from Pulse.
Day 1 evening was reserved for birthday celebrations! Yes, IBM Tivoli Storage Manager turns 20!! The celebration marks two decades of backup and recovery management leadership. The Solution Expo hall was abuzz with storage enthusiasts! Thanks to Dave Russell of Gartner Inc. for joining TSM's 20th birthday celebrations!! Needless to say, what a great way to network with storage experts from around the world!
And what a way to end Day 1: a musical extravaganza! Bella Electric Strings performed at the Expo opening.
Thanks, folks, for bringing Pulse to people who are tracking and trending the world over. What I may not know is how much you all won at blackjack. Have a good one!
seen tremendous growth in recent years due to the size, type, and complexity of data that is being created. Multimedia files, efforts to go “Green”, and the general desire to be more collaborative in our daily work, with all of our data on hand as needed, have increased the demand for storage. Analysts state that half of all data in existence today was created within the last five years. What does that say about what the next five will bring? In our digital world, this growth is driving the need for new thinking around the cost and complexity of managing storage; a desire for hyper-efficiency to get the absolute most out of storage resources is now the norm. This requires new inventions in storage techniques and data analytics. You have likely heard of cloud. In fact, you probably back up some portion of your data to a cloud service today, either from your personal computer, tablet, or smartphone. These services enable self-service delivery of storage and are built on an integrated, optimized infrastructure. Services like these are a model for doing business that can be translated into datacenters all over the world.
The IBM SmartCloud Virtual Storage Center is the cornerstone of our storage cloud services because it enables users to easily migrate to an agile, cloud-based storage environment, and then to manage it effectively. VSC delivers unique capabilities for our customers, including automated provisioning of virtual storage resources pooled across heterogeneous storage platforms, built-in efficiency features that remove traditional barriers to increasing the utilization of your existing storage assets, and non-disruptive data mobility so that your data is available no matter where you are. Storage can be provisioned directly from a catalog in a self-service fashion and then placed on the appropriate class of storage based on the required characteristics of the workload, with integrated data protection and resiliency to match service level requirements. As workload characteristics change over time, VSC can help reduce costs by migrating less critical data to less expensive media.
A United Kingdom-based investment and insurance company recently implemented IBM SmartCloud Virtual Storage Center to build a more flexible, automated approach to managing data than they had previously been able to achieve. Their IT department is now able to offer an improved service to all lines of business at a far lower cost. They have been able to reduce the number of physical disk drives they will need to invest in moving forward while improving performance, which has allowed them to redistribute their IT budget into other key areas.
Here in the United States, one of our major telecom providers has been leveraging Virtual Storage Center to increase the utilization of their storage assets to over 80%, while reducing their reliance on tier 1 storage to greatly improve their total cost of ownership. In addition, they have been able to automate many of the manual tasks associated with managing their enterprise data center, freeing IT resources to focus on strategic initiatives.
SmartCloud Virtual Storage Center has experienced a meteoric rise within our storage software portfolio, with over 60 new customers in just one quarter of general availability. This integrated approach to dealing with the major obstacles in enterprise data centers has been well received by analysts and customers alike. Leveraging key capabilities from our existing storage software portfolio, built on years of experience and leadership, IBM SmartCloud Virtual Storage Center has helped customers address the demands of rapid storage growth and provides built-in resiliency to ensure 24x7 operations.
For more information on IBM SmartCloud Virtual Storage Center and Pulse 2013, please check out Tivoli Storage at Pulse by clicking here.
Data protection matters! Actually, it matters even more with the advent of big data. The unique challenges of managing and protecting big data have forced IT professionals to re-examine their data backup and protection policies. Every year ESG conducts a forward-looking spending-intention survey, and they shared a couple of interesting facts that do not surprise me but definitely reinforce my thoughts. When organizations were asked what they would consider the most important IT priorities over the next 16-18 months, 30 percent responded “improved data backup and recovery”!
And when they were asked what they would characterize as challenges with their organizations’ current data protection processes and technologies, “cost” and the “need to reduce backup time” came out as the major concerns.
ESG analysts Mark Peters and Tony Palmer shared these insights as they took us through the results of their lab testing of Tivoli Storage Manager. If you are not familiar with IBM Tivoli Storage Manager (TSM), it is scalable client/server software primarily designed for centralized, automated data protection. The goal of the ESG report is to educate IT professionals and provide insight into advanced data backup technologies, such as forever-incremental backup and deduplication, and why they are so important in the current landscape. Click here for the ESG video.
The TSM Lab Validation was performed using a combination of hands-on testing, audits of IBM customers in live production environments, and detailed discussions with IBM experts. The objective was to validate some of the valuable features and functions of the product, show how they can be used to solve real customer problems, and identify any areas for improvement. IBM has continuously invested in the TSM platform, bringing innovation to data protection and recovery, and ESG evaluated how the newer versions of TSM provide a turnkey solution to a range of data protection issues. They found that the two technologies (deduplication and progressive incremental backups) working in tandem were able to achieve 90 percent data reduction after just six incremental backups and 95 percent data reduction after ten backups. The replication function is also fully integrated with deduplication, optimizing for quicker recovery during disasters. TSM uses policy-based automation along with intelligent move-and-store techniques, helping to reduce data administration effort. Overall, ESG’s validation rightfully points to the key enhancements to the TSM platform that drive greater scalability, efficiency, and data availability. Please register and download the detailed 23-page ESG Lab Validation Report here. Opinions are my own.
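To put rough numbers behind the progressive-incremental idea, here is a toy back-of-the-envelope comparison of cumulative backup volume under traditional periodic fulls versus a single full followed by incrementals forever. The 1 TB source size and 2% daily change rate are hypothetical illustration values, not ESG's test parameters; the 90-95% reductions quoted in the report additionally include deduplication.

```python
# Toy comparison of cumulative backup volume (hypothetical numbers,
# not ESG's methodology): traditional weekly fulls vs. TSM-style
# progressive ("forever") incremental backups.

def traditional(full_tb=1.0, change=0.02, days=10, full_every=7):
    """Full backup every `full_every` days, incrementals in between."""
    total = 0.0
    for day in range(days):
        total += full_tb if day % full_every == 0 else full_tb * change
    return total

def progressive_incremental(full_tb=1.0, change=0.02, days=10):
    """One initial full, then only changed data thereafter."""
    return full_tb + (days - 1) * full_tb * change

t = traditional()                 # 2.16 TB over 10 days
p = progressive_incremental()     # 1.18 TB over 10 days
print(f"traditional: {t:.2f} TB, progressive incremental: {p:.2f} TB")
print(f"reduction from skipping repeat fulls: {100 * (1 - p / t):.0f}%")
```

Even this simplistic model shows a large cut in moved data just from eliminating repeat full backups; deduplication then removes redundancy within what remains, which is how the combined figures in the report arise.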
If you are following the developments related to Pulse 2013 you’re likely well aware that Peyton Manning has been announced as the keynote speaker and the evening entertainment at Pulse Palooza will feature Carrie Underwood. If you’ve been to Pulse before you also know you can expect compelling thought leadership in the General Sessions and deep content in over 300 breakout sessions to choose from.
Over and above all that exciting news there’s one thing that keeps attendees coming back year after year – the opportunity to network. Each year following Pulse, attendees tell us through the post-Pulse survey that networking with the over 8000 conference attendees rises to the top as the most valuable aspect of the event.
The opportunity to network with your Storage colleagues at Pulse 2013 will once again be front and center at the conference. Formal opportunities exist such as the Storage Birds of a Feather session, Meet the Experts in Storage, Client Connections along with access to Storage subject matter experts from development and product management in the Expo Hall. And of course in Las Vegas there’ll also be plenty of informal gatherings to connect with Storage professionals to share knowledge and expertise.
A great way to start the networking process is to take in the numerous client-led sessions in the Unified Recovery and Storage Management track within the Cloud and IT Optimization stream at Pulse 2013. Following the track kick-off, which features Dave Russell, Research Vice President at Gartner, you’ll have the opportunity to hear IBM clients sharing their experiences, some highlights of which include:
• Learning about the experiences of Chesapeake Energy with the new TSM Backup and Recovery Dashboard based on their participation in the Early Adopters Program;
• Understanding how The University of Sydney is using SmartCloud Virtual Storage Center to provide centralized management of its diverse storage environment;
• Hearing how Banco do Brasil improved its backup capabilities by taking advantage of the latest advancements in Tivoli Storage Manager;
• A panel of experts from Blue Cross Blue Shield of Louisiana, Kindred Healthcare and Centene Health discussing how they are protecting healthcare data with IBM storage solutions.
While this is just a tiny sampling of the types of organizations that will take to the podium in the Storage track at Pulse, there’s a wealth of experience to help you tackle your most pressing storage management challenges. Taking in the sessions is only the beginning – connecting with these storage professionals in the numerous networking opportunities at Pulse is how the conference truly comes to life.
If you’re already registered for Pulse, you can start networking now by connecting with the growing list of speakers and other conference attendees on the Pulse 2013 Vivastream site; if not, visit the PULSE 2013 home page for all the conference details and to register today.
Please plan to join IBM and thousands of your peers at the MGM Grand Hotel in Las Vegas, March 3 to 6, 2013.
PULSE 2013 is IBM’s premier event focused on business transformation and IT optimization, helping clients learn how to turn opportunities into outcomes.
As the planet becomes smarter, it becomes clear that a solid, robust, scalable and cost-effective IT infrastructure is required to create, store and manage all the information at the heart of these new opportunities.
Unified Recovery and Storage Management is the cornerstone track within the Cloud and IT Optimization stream at PULSE 2013. We are putting together a very exciting agenda, and I’d like to give you a preview of what you can learn from your peers, thought leaders, and yes, a few IBMers, by attending this track.
We kick off the track on Monday with a keynote presentation by Dave Russell, Research Vice President at Gartner. Dave will describe the trends that his team is seeing, and encourage you to take a position on transforming and optimizing your data management infrastructure.
During the 3 days of breakout sessions, you will learn how many of our customers have started on this journey, including best practices and outcomes. Our speakers include subject matter experts from:
• Two major banks
• Two universities
• Five healthcare organizations
• Several consumer and industrial companies
• Five managed service providers
• A leader in media and entertainment
Some of the top-of-mind topics that will be covered include management and protection of virtualized server and storage environments; advancements in disaster recovery and business continuity; storage in the Cloud, storage as the Cloud, and storage to the Cloud; backup consolidation and simplification; and how to easily cost-justify an efficiency improvement project to your management.
You can also learn how IBM “eats its own cooking” as the IBM Office of the CIO describes its use of IBM storage management software to drive costs out of our business while meeting the computing demands of a company the size of IBM.
You will also have the opportunity to learn about new products and enhancements – we can’t tell you what they are yet, but we’re pretty excited.
You can see who our expert speakers are, what they’ll be speaking about, and start to build your experience at Pulse this year by visiting the Pulse SmartSite and Agenda Builder at: http://www.pulsesmartsite.com/
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
We’re getting deep into the planning for our 6th annual PULSE conference (ibm.com/pulse), and I’m getting very excited about the storage content that is being assembled. Again, it will be at the MGM Grand Hotel in Las Vegas, March 3 – 6, 2013.
At our Storage Track Kickoff session, we’ll have some new things to announce and highlight, and we’re close to announcing an exciting keynote speaker.
Following the track kickoff, we’ll have 20 breakout sessions on data protection and storage management topics, covering advances in virtual machine protection, disaster recovery, cloud integration, and a lot more. We’re mixing it up a lot more this year to ensure you get a range of perspectives. We’ll have 21 client speakers discussing their experiences and best practices; plus 8 business and technology partners providing insights into added value approaches to storage management who will be complemented by IBMers sharing the new stuff we’ve been working on.
Among the client speakers will be storage professionals from across the globe representing major banking, healthcare, media, industrial and university organizations. There will also be sessions on a variety of cloud topics, including private cloud storage and backup-as-a-service opportunities.
To follow on a theme mentioned by Steve Mills in his keynote at PULSE 2012, we’ll show how IBM “eats its own cooking”, sharing how IBM’s Office of the CIO transformed its massive storage infrastructure; and how IBM’s Strategic Outsourcing services organization is leveraging our products to more effectively manage their clients’ storage environments.
There will be many cool things to see in the expo center again this year, including offerings from many of our ecosystem partners, and you can roll up your sleeves in the hands-on labs and product training and certification areas.
Have you heard about this year’s PULSE PALOOZA entertainment? We rocked the Grand Garden Arena with Maroon5 in 2012, and will follow that with Carrie Underwood in 2013.
Now’s the time to act. Early bird registration, which saves client attendees $500 off the conference fee, closes December 31st. Go to http://ibm.co/pulseregister and get ready for an outstanding event. I look forward to seeing you there.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Server virtualization and storage virtualization go hand in hand. Centralized, virtualized storage is crucial for advanced server virtualization to be flexible and easy to manage. Companies are realizing that to unleash the real potential of an agile cloud infrastructure, storage virtualization has to become as mainstream as server virtualization.
For many companies, there is a constant need for additional storage resources to support growing volumes of information. And if you don’t focus on managing your storage infrastructure, you can find that one virtual server is running out of storage capacity even while there is ample capacity in other parts of the network.
With storage virtualization, companies can now make better use of existing investments in disk capacity and can often postpone the need to purchase additional capacity. As storage becomes virtualized, it becomes easier to manage and helps companies adapt to business needs much faster. And not to forget: it actually costs less!
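The stranded-capacity problem described above can be shown with a toy calculation. This is an illustrative sketch with made-up numbers, not any IBM tool's logic: three arrays of equal size, with uneven demand from the servers attached to them.

```python
# Toy illustration of storage pooling (hypothetical numbers).
# Three 10 TB arrays; demand is uneven across the attached servers.

arrays_tb = [10, 10, 10]
demand_tb = [12, 4, 5]

# Siloed: each server can only use its own array, so demand above
# that array's capacity is stranded even though other arrays have room.
unplaced_siloed = sum(max(d - c, 0) for d, c in zip(demand_tb, arrays_tb))

# Virtualized: all arrays form one pool, so placement only fails when
# total demand exceeds total capacity.
unplaced_pooled = max(sum(demand_tb) - sum(arrays_tb), 0)

print(f"unplaced without pooling: {unplaced_siloed} TB")  # 2 TB stranded
print(f"unplaced with pooling:    {unplaced_pooled} TB")  # 0 TB stranded
```

The same total capacity serves all the demand once it is pooled, which is exactly why virtualization lets companies defer new disk purchases.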
For the past 4+ years at IBM, I was the worldwide product marketing manager for the IBM Tivoli Storage Manager (TSM) family of data protection and recovery solutions. But as of a month ago, I am now in a brand new role, Customer Experience Product Manager across the Tivoli Storage software group.
I’ve been asked to bring together various IBM business functions to help identify areas for customer experience improvement, measure success, and generally be the champion for all things that will help address a customer’s experience with our products.
In addition to reading and learning and reaching out to like-minded people, my initial efforts have been to assess where we are and the progress we’ve made to date. For example, we’ve made tremendous progress within the TSM family over the past 4 years starting with the release of TSM v6.1 – adding in valuable features such as deduplication, replication, monitoring and reporting, while also streamlining and unifying the user interface. We also introduced a new pricing model that makes it much easier to acquire, manage and forecast your backup software licenses.
The next release of the TSM family will be announced in early Q4-2012, and it will include some exciting enhancements to improve the user experience in TSM, TSM for Virtual Environments and Tivoli Storage FlashCopy Manager. And next year we’ll be rolling out a brand new user interface that promises to make the day-to-day administration of TSM much, much easier.
We’ll also be rolling out the IBM SmartCloud Virtual Storage Center, our storage hypervisor that was pre-announced at PULSE in March, and it promises to do for storage what VMware and others have done for servers: improve utilization, simplify management, and reduce costs. There are many upcoming events in October that will feature these Tivoli Storage product announcements, and we hope you can join us to hear more details.
Your experience with our products is only one pillar in the building that we’re trying to construct. How does it tie in with your interactions with our marketing and sales teams? Does our story match our solution capabilities? Are you getting what you were promised?
Does the product documentation provide the answers to your questions, or help to avoid needing to ask the questions at all?
And what about support? A recent survey by STORAGE Magazine of enterprise backup software users rated IBM #1 in the category of support. But what can we be doing better, or differently, to keep you delighted in your relationship with IBM?
At the end of the day, I’m looking at this role in 2 ways:
- What can we fix or improve?
- How can we ensure that everything new that we do is viewed through a customer experience lens?
To be successful, I’m going to need the help of a lot of people who have a vested interest in our success – across all facets of our business, but especially from our customers and partners. Please reach out to me at firstname.lastname@example.org with any ideas or comments.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Recently, I had the distinct pleasure of delivering a presentation on Data Storage and Compliance at the IBM Tivoli event 'Business Without Limits 2012' in Bangalore, India. More than 100 attendees from almost every industry attended the event.
My Track for the day: Addressing Data Growth, Threats and Compliance; Unified
The volume, velocity and importance of data have increased dramatically during the past few years, to the point where most backup and archiving solutions can't keep up with the scalability, functionality, performance, reliability and budget realities of today and tomorrow. Attendees learned how to reduce backup data capacity by as much as 95%; how to reduce the amount of new data at risk by 90% or more; and how to simplify global data recovery operations and achieve compliance by leveraging a unified management approach.
I was privileged to present in such an interactive session, where customers learned how our broad product portfolio can help address their business challenges.
IBM now brings ‘Business Without Limits 2012’ to several cities across the United States in October and November. This is an exclusive IBM Tivoli event designed to increase awareness and thought leadership among IT managers, infrastructure leaders, systems administrators, storage managers, and data center managers. IBM’s Business Without Limits event is coming soon to the following cities:
The event will focus on how IBM’s Integrated Service Management strategy brings together different capabilities to enable integrated delivery of business services across complex, interconnected physical and digital infrastructures.
IBM’s Business Without Limits event will have the following Storage Tracks:
• The pivotal role of storage in the modern data center
• Backup and unified recovery
• Your data protection headaches to the cloud
• Storage analytics and reporting
This conference will explore how you can capitalize on the opportunities of a smarter planet and remove the barriers to innovation that will help you achieve “Business Without Limits.” As today’s leaders transition to smarter, flexible cloud infrastructures that speed the delivery of innovative products and services, effective storage management becomes a critical component of that success. Please join us at this event to learn more!
As an IBM marketing manager, my job includes writing about storage technology. This post is about more than technology, though. It’s about a new breakthrough capability for managing storage costs and service levels.
I recently met with IBM Distinguished Engineer Mike Sylvia, who has been working on a Business Transformation project to enable automated right tiering for storage in IBM data centers. Right tiering is the notion that data should be hosted on the optimal storage tier to balance cost and performance requirements.
Mike explained that applications tend to be hosted on top-tier storage. When he analyzed actual usage patterns, Mike found most data can be effectively hosted on lower-cost storage. Mike’s project put numbers to a problem that is often hidden from view and, until now, nearly impossible to solve.
Hosting data on the wrong storage tier turns out to be a huge efficiency problem. Mike predicts IBM will save $13 million over 3 years in one data center by periodically moving data to the right tier. During the pilot, users saw their cost for storage drop by 50% per TB on average. This is big.
Like many advancements, IBM’s automated right tiering capability is accomplished by integrating existing technology. Mike Sylvia’s project combines storage virtualization, storage management automation and analytics. Today, IBM offers the technology in a bundled solution called SmartCloud Virtual Storage Center.
How does it work?
Step 1: IBM’s storage virtualization controller collects detailed usage metrics about storage it manages throughout the data center, without impacting application performance.
Step 2: IBM’s Storage Analytics Engine studies usage patterns over time to understand performance requirements.
Step 3: Storage tier recommendations are generated in reports that can be shared with application owners and IT management.
Step 4: Storage virtualization enables online data migration, with no disruption to applications or users.
Repeat: Usage patterns change over time, of course, so right tiering becomes an ongoing process.
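The steps above amount to a measure-analyze-migrate control loop. The sketch below is a minimal, hypothetical outline of that loop: the tier names, IOPS thresholds, and simple if/else policy are invented for illustration and are not how VSC's analytics actually decide placement.

```python
# Illustrative right-tiering loop (hypothetical tiers and thresholds,
# not IBM's actual implementation).

from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    tier: str    # current tier: "ssd", "fc_disk", or "nearline"
    iops: float  # observed average I/O per second (Step 1-2: measured usage)

def recommend_tier(iops: float) -> str:
    # Step 3: map observed usage to a tier recommendation.
    if iops > 1000:
        return "ssd"
    if iops > 100:
        return "fc_disk"
    return "nearline"

def right_tier(volumes):
    # Step 4: migrate volumes whose observed usage no longer matches
    # their current tier. Migration here is just a field update; a real
    # system moves the data online, with no disruption to applications.
    moves = []
    for v in volumes:
        target = recommend_tier(v.iops)
        if target != v.tier:
            moves.append((v.name, v.tier, target))
            v.tier = target
    return moves

vols = [Volume("db01", "ssd", 40.0), Volume("mail", "nearline", 2500.0)]
print(right_tier(vols))  # db01 down-tiers to nearline, mail up-tiers to ssd
```

Running the loop periodically, as the "Repeat" step suggests, keeps placement aligned with usage as workloads drift.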
Why does it work?
Automated right tiering delivers the efficiency benefits of Information Lifecycle Management without the headaches and hidden costs. Automated right tiering has significant benefits for both data owners and IT leaders, so everyone wins.
For example, application and database owners can gain the following benefits:
Applications can move to top tier storage when they need it, without waiting for a maintenance window.
Average storage costs drop significantly, without a drop in services.
IT leaders benefit, too. For example:
Storage tier decisions are based on analysis of actual usage patterns, not predictions. Storage performance management tasks are eliminated.
Data can quickly and easily be moved back to its original storage tier if requested, without incurring an outage.
IBM automated right tiering works with most storage systems, so deployment is nondisruptive.
The technology that enables automated right tiering has significant additional benefits, such as the ability to eliminate scheduled outages for storage system maintenance.
Problem solved. How has your organization addressed the storage right tiering challenge?
IDC has recently released its Worldwide Storage Software QView for the first quarter of 2012. In it, IDC estimates that the total storage software market for 1Q12 grew about 3.3% over 1Q11. IBM had a solid quarter while Symantec faltered, allowing IBM to take the overall #2 share rank position for 1Q12.
• In the Overall Storage Software Market, IBM moved up to the #2 share rank position in 1Q12, gaining 2.0 share points over 1Q11.
• IBM retained its #1 position for Archiving Software, growing faster than the market. HP holds the #2 spot with its 2011 acquisition of Autonomy.
IBM offers a comprehensive, flexible storage management software portfolio that helps organizations address storage management challenges across the enterprise, including data centers, remote/branch offices and desktop/laptop computers. Learn more about the specific components within the IBM storage software family that can help you create a more responsive and resilient storage infrastructure for your on demand business.
In a previous post, we talked about the recent reviews that the Tivoli Storage Productivity Center (TPC) received. In particular, we are very pleased with the 'Leader' designation received in the recent Gartner Magic Quadrant review for Storage Resource Management.
It's not just the analyst reactions that are positive. Based upon a customer-focused feature list, the product team undertook an overhaul of the Graphical User Interface (GUI) and introduced a dashboard that provides easy-to-use, comprehensive reporting. To ensure they had got it right, the proposed changes were demonstrated on the Expo floor at the Pulse 2012 conference in Las Vegas earlier this year. Responses from the user base were enthusiastic, to the extent that this next iteration is quickly becoming a sought-after item.
A beta test program was initiated at the conference as the true litmus test that the proposed new features would stand up in a real production environment. Early responses point to some interesting observations. When polled about their experiences with the next evolution of the product, beta testers said one of the most talked-about aspects was the set of features provided to simplify complex reporting. They derived great time and productivity benefits from having a picture of the full storage environment, something they previously had to go to multiple places to piece together. A common benefit registered was time savings when it came to complex reporting.
What is compelling, however, is the business analytics that this next iteration yields. Tivoli Storage Productivity Center (TPC) provides detailed topology views of the entire storage infrastructure. In the overhauled GUI, administrators can observe the overall health of the environment instantly. A simple right-click provides detailed views of each of the storage network entities. The facilitation of these environment-wide views led a beta customer to observe that 'more than just the storage engineer can now get a simple view of their SAN environment'. What does this mean? It means that what started out as a time saver for the practitioners - the storage engineers - now becomes an entryway for the management team to get a quick look at the overall environment, allowing for higher-level strategic discussions about storage environments and needs.
Is this good or bad? A recent survey revealed that CMOs will outspend CIOs on IT by 2017. When I tweeted this, I was asked by @jamie_joyce why it would take this long. My answer is that it's likely due to the classic tension between a cost-saving position on infrastructure and a growth position on business analytics or feature offerings. When you think about Big Data within business analytics and the proliferation of mobile devices as two huge growth areas, the commonality is a mass proliferation of data in orders of magnitude never imagined. The conversation comes back to storage, and the associated resource management.
Which way does your company lean? Where is your head in that tension between cost savings and growth when it comes to your storage environment?
I chatted with Product Marketing Manager Amalore Jude about this and the kind of reaction the team got at Pulse in Vegas in March of this year when they demo'd the new GUI interface. He was quite pleased with the response. 'Customers were very excited looking at the new, next-generation interface,' he told me. 'Many are awaiting June 4, when they can actually lay their hands on it.'
Well, June 4 is around the corner. If you are a regular reader of this blog, it's quite likely I will meet you at the Edge Conference in Florida next week. If you're there, please tweet me @brenny or find me somehow and say hi.
The conference is selling out, but there are still passes available for the Tech Edge portion of the four-part event. It's not too late to register. The Tech Edge portion is well laid out, with over 250 sessions led by IBMers and customers. Sometimes it's better to hear the war stories of your peers when you're trying to figure out how to exploit what you have, or are considering getting.
One customer who is speaking is Gary Fry of Unum. His session on March 6, 10-11am in Rm 115 is on Unum's use of the SAN Volume Controller and his experiences beta testing the new evolution of TPC.
So, if you are going, then I hope to see you out there. If you haven't yet decided, then getting a first look at this next evolution of storage infrastructure management is hopefully good motivation to consider it.
Every year I try to publish a set of storage trends that I believe most IT shops are trying to address and where technologies exist to help resolve. Here are my thoughts for 2012...
1) Storage breakthroughs – nipping the “Digital Dark Age” in the bud
Since the early 1990s, an increasing proportion of the data created and used has been digital. Today, the world produces more than 1.8 zettabytes of digital information a year. Yet digital storage can in many ways be more perishable than paper: disks corrode, bits “rot” and hardware becomes obsolete. This presents a real concern of a “Digital Dark Age,” where digital storage techniques and formats created today may not be viable in the future as the technology originally used becomes antiquated. We’ve seen this happen: take the floppy disk, for example. A storage tool so ubiquitous that people still click on its enduring icon to “save” their word, presentation and spreadsheet documents, yet most Millennials have never seen one in person. But new research shows storage mediums can be vastly denser than they are today. New form factors such as solid-state disks will help provide more stable, longer-term preservation of data, and the promise of "the cloud" allows access to data anywhere, anytime. Recently, IBM researchers combined the benefits of magnetic hard drives and solid-state memory to overcome the challenges of growing memory demand and shrinking devices. Called Racetrack memory, this breakthrough could lead to a new type of data-centric computing that allows massive amounts of stored information to be accessed in less than a billionth of a second. This storage research challenges previous theoretical limits to data storage, helping ensure our digital universe will always be preserved.
2) Data curation will provide structure in the midst of the data deluge
Now that we have the capability to preserve our digital universe, we need to find a way to make it useful. We need to take the next step past data preservation to data curation. Data curation is the active and ongoing management of data through its lifecycle. This smarter data categorization adds value to data, helping glean new opportunities, improve the sharing of information and preserve data for later re-use. Social media is a great example of the power of curated data. Sites like Facebook, Google+ and Pinterest compile our digital lives and give their users a platform to organize their content. However, there's also a lot of work involved in selecting, appraising and organizing data to make it accessible and interpretable. The key is bringing data sets together, organizing them and linking them to related documents and tools. If data can be stored in a way that provides context, organizations can find new and useful ways to use that data.
3) Storage analytics will open new business insights
With data curation giving organizations a platform to better utilize their data, analytics will help turn that data into intelligence and, ultimately, knowledge. With the information that historical trending analytics and infrastructure analytics provide, you can index and search more intelligently than ever before. By running analytics on stored data, in backup and archive, you can draw business insight from that data no matter where it exists. The application of IBM Watson technology to healthcare provides a good example. Watson collects data from many sources and is able to analyze its meaning and context. By processing vast amounts of information and applying analytics, it can suggest options targeted to a patient's circumstances and assist decision makers, such as physicians and nurses, in identifying the most likely diagnosis and treatment options for their patients. Through intelligent storage and data retrieval systems, we can learn more from the information we have today to improve service to customers or open new revenue streams by leveraging data in new ways.
4) Storage becomes a celebrity – new business needs are pushing storage into the spotlight
As our digital and data-driven universe expands, certain industries are reaching new levels of innovation by having the capacity to house, organize and instantaneously access information. For example, Hollywood is known for its big-budget blockbusters, but it’s the big storage demands required by new formats such as digital, CGI, 3D and high definition that are impacting not just the bottom line, but studios’ ability to produce these types of movies. Data sets for movies have grown to the petabyte level. Filmmakers are beginning to trade in film reels for SSDs, as just one day’s worth of filming can generate hundreds of terabytes of data. The popularity of these high data-generating formats means studios are looking for new storage technologies that can handle the demand. The healthcare industry may be facing an even bigger data dilemma than the entertainment business. Take the University of Leipzig in Germany, which runs a major genetic study called LIFE to examine disease in populations. LIFE is cataloging genetic profiles of several thousand patients to pinpoint gene mutations and specific proteins. This process alone generates multiple terabytes of data. Even one 300-bed hospital may generate 30 terabytes of data per year. Those figures will only grow with higher-resolution medical imaging and new tools and services such as electronic healthcare records.
5) Intervention...The Data
In this era of Big Data, more is always better, right? Not so – especially when every byte of data costs money to store and protect. Businesses are turning into data hoarders, spending too much time and money collecting useless or bad data and potentially making misguided business decisions as a result. This practice can be changed with simple policy decisions and by implementing capabilities that already exist in smarter storage, but companies are hesitant to delete any data (and, many times, duplicate data) for fear of needing specific data down the line for business analytics or compliance purposes. Part of the solution starts with eliminating the copies. Nearly 75% of the data that exists today is a copy (IDC). By deleting and disabling redundant information, organizations are investing in data quality and availability for the content that matters to the business. Consider the effect of unneeded data, costing money as it replicates throughout an organization’s information systems. This outdated data can also potentially be accessed for fraud. Ensuring the quality of data is not costly; not getting it right is.
ARE YOU SPEAKING AT PULSE? IF SO, READ ON PLEASE...and book your room at the MGM Grand today to avoid a price increase!
1. Have you uploaded your presentation? The deadline to upload presentations was January 20th to enable appropriate reviews and posting to the Pulse 2012 SmartSite Agenda Builder. Your presentation will be converted to PDF and can be downloaded or printed in advance by attendees, pending your approval. For a full list of presentation guidelines and processes please review the Presentation tab on the online Speaker Kit.
2. Do you know what audio visual equipment will be available in your session room? Click the A/V tab in your online Speaker Kit to review this important information.
3. Are you connected? Follow the conference news & highlights on Twitter or the Pulse blog. Click the Speaker Kit tab to find links and hashtags for use with social media. Find Pulse attendees using the Pulse SmartSite agenda builder.
4. Attendees are always interested in getting to know their speaker! Do you have a bio? Review and update your brief bio by logging onto the Speaker Kit website.
5. Have you started to build your Pulse conference agenda on SmartSite, the attendee conference portal? You will need your conference registration confirmation number to login to this site. Click the Build My Agenda icon to view scheduled sessions.
6. Have you registered for the conference and booked your hotel? Review the registration instructions listed in the registration tab on the speaker kit website.
Very important... Conference hotel accommodations are limited and available on a first-come, first served-basis. Conference rates are valid until January 27, 2012 or until the room block is sold out, whichever comes first.
Please take a few minutes to review the information in your online Speaker Kit, and follow-up on all speaker actions as needed.
If you have any questions or need additional information, please contact speaker support at PulseSpeaker@experient-inc.com. We look forward to seeing you at the MGM Grand in Las Vegas March 4-7!
IBM has detailed innovative projects and research that show new storage approaches to support Big Data growth and drive business innovation. Healthcare, financial services, media and entertainment, and scientific research, among many other industries, face the challenge of storing and managing the proliferation of data to extract critical business value. As storage needs rise dramatically, storage budgets lag, requiring new innovation and approaches around storing, managing and protecting Big Data, cloud data, virtualized data and more.
Watson-inspired Storage Takes on the Cosmos: IBM is working on a project with the Institute for Computational Cosmology (ICC) at Durham University in the U.K. and Business Partner OCF to build a storage system to better store and manipulate Big Data for its cosmology research on galaxies. ICC is adopting the same IBM General Parallel File System technology used in the IBM Watson system to store and manage more than one petabyte of data from two significant projects on galaxy formation and the fate of gas outside of galaxies. The enhanced storage system will enable up to 50 researchers working collaboratively to access and review data simultaneously. It will also help ICC learn to manage data better, storing only essential data and storing it in the most efficient way.
New Storage Platform Delivers More Personalized, Visual Healthcare: A medical archiving solution from IBM Business Partners Avnet Technology Solutions and TeraMedica, Inc., powered by IBM systems, storage and software, gives patients and caregivers instant access to critical medical data at the point of care. Developed in collaboration with IBM, the medical information management offering can manage up to 10 million medical images, helping healthcare practitioners provide better patient care with greater efficiency and at reduced cost. The integrated platform allows users to manage and view clinical images originating from different treatments and providers, bringing secure, consistent image management and distribution at the point of care.
Virtualization Consolidates Storage Footprint for Medical Center: Kaweah Delta Health Care District (KDHCD), a general medical and surgical hospital in Visalia, Calif., needed to reduce its operational costs while increasing storage space. To meet these demands, KDHCD tapped IBM's storage systems to create a new storage platform that reallocates resources and saves a significant amount of data space with thin-provisioning technology. Virtualization creates a smaller hardware footprint, so the hospital also saved on power and cooling costs. KDHCD now has a consolidated storage environment that provides the scalability, ease of management and security to support critical healthcare data management for the hospital.
IBM is looking for customers and business partners who are interested in participating in an Early Access Program (EAP)/Beta Program for an upcoming release of FlashCopy Manager, Data Protection for SQL, and Data Protection for Exchange. If you would like to nominate your organization to participate in this EAP/Beta, please send an email to:
Mary Anne Filosa (email@example.com)
and be sure to include your organization's name. Once your email is received you will be sent instructions on signing off on the EAP/Beta legal form online and when that signoff has been completed, you will be sent a link to the program's nomination site. We encourage you to respond quickly if you are interested as the program begins in mid December.
Live Webcast: Using Tivoli Storage Productivity Center to be the "eyes" into your SAN environment, and to see how that environment is changing. LIVE!
In the ever-changing SAN environment, Tivoli Storage Productivity Center has many components to help the Storage Administrator know when and where to focus their attention. We will walk through many of these in a live demo and see how they can be used.
Let TPC help you keep up with storage growth instead of working longer hours!
Scott McPeek, IBM Program Director, Storage Sales Enablement. He has worked in the software industry for more than 30 years; the last ten have been with IBM, as part of the TrelliSoft SRM acquisition. Scott now focuses on storage resource management, storage performance management and virtualization with products like TPC, SVC and the Storwize V7000.
How are you spending your time this weekend? Polishing up your Pulse 2012 storage session abstract, hopefully! With only 4 days left to submit a 100-word abstract by Nov. 7, we thought it would be helpful to share some final pointers. Keep in mind that this year's theme is Business Without Limits, and we are seeking to understand how you gained visibility, control and automation to deliver better business results.
What are the key benefits to you as a Speaker? One full Pulse conference pass ($1,995 value), the opportunity to gain visibility for your company, and an incredible networking opportunity with over 7,000 industry experts, press and analysts.
Here are some pointers on how to get your Storage Management session abstract accepted:
1. Focus it on topics such as how you used Tivoli Storage Manager to manage "big data"; success with recent upgrades; or cloud storage
2. Tell us about the key business challenges you were trying to solve, and how IBM Tivoli storage solutions helped you address them
3. What was the ROI, or key results, from implementing a Tivoli storage solution, and what valuable lessons did you learn from the experience?
Don't forget to register during early bird registration by December 16 if you do not plan to speak at Pulse and attend the conference complimentary. Early Bird registration can save you up to $700 off registering onsite! See you at Pulse 2012!
Well, it's that time again – hard to believe, I know… The PULSE call for papers has opened, and we want to have another banner year in the Tivoli Storage sessions! Last year we were standing room only in many of our sessions, and this year we hope to fill each room once again.
As for topic suggestions, we'd like to hear from customers who:
Use TSM to manage 'big data'
Have best practices, created with our Tivoli Storage portfolio that they want to share
NEW!! Technical Services Webinar: Capacity Planning in a Tivoli Storage Manager Environment
As much as customers would like to “back up everything and keep it forever”, storage is not unlimited. The reality of ever-increasing data growth, combined with regulatory compliance and the associated risks, makes the arduous task of capacity planning for backup ever more critical. A new Reporting and Monitoring tool is available with Tivoli Storage Manager (TSM). This new tool, based on IBM Tivoli Monitoring, can collect and report on historical data and is an integral part of a capacity planning regimen.
This session will demonstrate a capacity planning methodology that conforms to the ITIL Capacity Planning process description by showing how the TSM Reporting and Monitoring tool and other TSM components can be utilized to ease the pain of capacity planning. Additionally, this session will look at strategies, like data deduplication, to reduce the amount of backup data while maintaining regulatory compliance.
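The forecasting half of that methodology can be approximated with a little math outside any tool. Here is a minimal sketch, assuming a simple linear growth model; the monthly usage figures are made up for illustration, not real TSM report output:

```python
# Hypothetical sketch: fit a linear trend to historical storage-pool
# usage (TB per month) and project when the pool will be exhausted.

def forecast_exhaustion(history_tb, capacity_tb):
    """Least-squares linear fit; returns months until capacity is reached,
    or None if usage is flat or shrinking."""
    n = len(history_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history_tb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history_tb)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None
    return (capacity_tb - history_tb[-1]) / slope

usage = [40, 44, 47, 52, 55, 60]          # last six months of pool usage, TB
months_left = forecast_exhaustion(usage, 100)
print(f"Capacity reached in ~{months_left:.1f} months")
```

With the sample numbers above, the projection lands at roughly ten months out, which is exactly the kind of early warning a capacity planning regimen is meant to provide.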
Presenters: Mark Vanderboll, IBM Tivoli Global Response Team Dave Daun, IBM Advanced Technical Skills
This is part 2 of a 3 part post on how somebody responsible for a private storage environment could save their company a pile of money by implementing cloud storage techniques. Part I introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. In this 2nd post, I’m going to explain some of the key storage cloud management controls that can be used to drive down cost.
Storage services are standardized. When it comes to shopping, I avoid (at almost all costs) actually going to the store. You can keep all the time and frustration of traffic, fighting for a parking place, wandering aimlessly through aisles of choices, and standing in checkout lines. I’ll take the simplicity and speed of a good online catalog any day.
The idea of shopping from a catalog isn’t new, and the cost efficiency it offers to the supplier isn’t new either. Public storage cloud service providers seized on the catalog idea quickly, both as a means of providing a clear description of available services to their clients and of controlling costs. Here’s the idea… I can go to a public cloud storage provider like Amazon S3, Nirvanix, Google Storage for Developers, or any of a host of other providers, give them my credit card, and get some storage capacity. Now, the “kind” of storage capacity I get depends on the service level I choose from their catalog. These providers each offer a handful of different service-level options. Amazon S3, for example, offers Standard Storage or Reduced Redundancy Storage (can you guess which one costs less?).
Most of today’s private IT environments represent the complete other end of the pendulum swing – total customization. Every application owner, every business unit, every department wants to have complete flexibility to customize their storage services in any way they want. This expectation is one of the reasons so many private IT environments have such a heavy mix of tier-1 storage. Since there is no structure around the kind of requests that are coming in, the only way to be prepared is to have a disk array that could service anything that shows up. Not very efficient… There has to be a middle ground.
Enter the private storage cloud with its storage service catalog. In the consultative service engagements we’ve done, we have found that most private enterprises have something like fifteen-ish distinct data types (things like database, e-mail, video, shared files, home directories, etc). A simple storage service catalog would describe the specific service levels needed by each of these data types. Let’s take “Database” and build out the scenario.
The first thing you’ll need is a place to create your catalog of storage services. IBM Tivoli Storage Productivity Center Standard Edition is a good option (man, what a mouthful – let’s just call it TPC SE for short… hmm, I’ll probably get fired for that :-) You’re going to use the wizard to create a new “Database” catalog entry.
Now, for each catalog entry, there are a variety of service levels that can be defined that cover things like capacity efficiency, I/O performance, data access resilience, and disaster protection. By this point you’re probably rolling your eyes because you know your application owners… and they’re going to want every byte of their data to have the highest available service in each of these areas (and you wonder why you have so much tier-1 storage). A little bit further into this post we’re going to talk about the wonder of usage-based chargeback, but we’re getting ahead of ourselves. For now, let’s assume you’re having a coherent conversation with your application owners and are able to define realistic needs for your database data. Maybe something like this…
From there, you’re back to the wizard. Actually defining the attributes of the catalog entry is a little mundane (lots of propeller-head knobs and dials to turn), but once you’re done – you’re done! – and life gets really efficient. So, let’s get the mundane stuff out of the way. First are the capacity efficiency and I/O performance attributes (be sure to notice that for “Database” we are telling the catalog we want virtual volumes – from a storage hypervisor. There will be a test in a paragraph or so :-)
Then the data access resilience attributes.
And finally the disaster protection attributes.
I told you it was a little mundane. But now come the exciting results that really drive cost out of the environment and save a huge amount of administrative time.
First is capital expense. You’re running mostly tier-1 disk arrays today. You have just finished defining the fifteen-ish catalog entries your company is going to use. Some, like “Database”, call for storage services that are often associated with tier-1 disk arrays. Most others don’t. With a little intelligent forecasting, you should be able to determine exactly how much tier-1 storage capability you really need, and how much lower-tier storage you can start using. We’ve seen clients shift their mix from 70% tier-1 to 70% lower-tier storage (a pretty significant capital expense shift). And if the thought of moving all that existing data from tier-1 to a lower tier makes you shudder, refer back to Part I of this post and look again at the data mobility provided by a good storage hypervisor (Test: did you notice that for “Database” we told the catalog we wanted virtual volumes – from a storage hypervisor?).
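To make that capital-expense point concrete, here is some back-of-the-envelope arithmetic; the per-TB prices are invented purely for illustration and are not IBM list prices:

```python
# Illustrative only: how shifting the tier mix changes capital expense
# for the same total capacity. Prices per TB are made-up assumptions.

def capex(total_tb, tier1_fraction, tier1_per_tb, lower_per_tb):
    """Capital cost of a capacity pool split between two storage tiers."""
    t1 = total_tb * tier1_fraction
    return t1 * tier1_per_tb + (total_tb - t1) * lower_per_tb

before = capex(500, 0.70, 5000, 1500)   # today: 70% tier-1
after  = capex(500, 0.30, 5000, 1500)   # after: 70% lower-tier
print(f"before=${before:,.0f}  after=${after:,.0f}  saved=${before - after:,.0f}")
```

Even with these toy numbers, flipping 500TB from a 70/30 tier-1-heavy mix to the reverse cuts the hardware bill by over a third, which is the shape of the shift described above.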
The second big savings is in operational expense (keep reading).
Storage provisioning is self-service. Most public storage services are targeted at end users like you and me who bring our credit card and provision some storage. Private storage clouds are a little different. Administrators we talk to aren’t generally ready to let all their application owners and departments have the freedom to provision new storage on their own without any control. In most cases, new capacity requests still need to stop off at the IT administration group. But once the request gets there, life for the IT administrator is sweet!
Here comes the request from an application owner for 500GB of new “Database” capacity (one of the options available in the storage service catalog) to be attached to some server. After appropriate approvals, the administrator can simply enter the three important pieces of information (type of storage = “Database”, quantity = 500GB, name of the system authorized to access the storage) and click the “Go” button (in TPC SE it’s actually a “Run now” button) to automatically provision and attach the storage. No more complicated checklists or time consuming manual procedures.
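As a sketch, that three-field request might look like the following. The `ProvisionRequest` class, the `CATALOG` entries, and the `provision()` function are hypothetical illustrations of the workflow, not an actual TPC SE API (which drives this through its GUI "Run now" button):

```python
# Hypothetical model of a self-service provisioning request against a
# storage service catalog. Names and catalog entries are assumptions.
from dataclasses import dataclass

CATALOG = {"Database", "E-mail", "Shared Files"}   # assumed catalog entries

@dataclass
class ProvisionRequest:
    service_class: str     # catalog entry, e.g. "Database"
    quantity_gb: int       # requested capacity
    host: str              # system authorized to access the storage

def provision(req: ProvisionRequest) -> str:
    if req.service_class not in CATALOG:
        raise ValueError(f"unknown service class: {req.service_class}")
    # In a real environment, this is where automation would carve the
    # volume, apply the catalog entry's attributes, and zone/mask it
    # to the requesting host.
    return f"Provisioned {req.quantity_gb}GB of '{req.service_class}' to {req.host}"

print(provision(ProvisionRequest("Database", 500, "dbserver01")))
```

The point of the design is that everything else (RAID level, tier, replication, zoning) is already decided by the catalog entry, so the administrator only ever supplies these three fields.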
Storage is paid per use. It’s the little appreciated – but incredibly powerful tool in the quest to drive cost out of the environment. When end users are aware of the impact of their consumption and service level choices, they tend to make more efficient choices. Conversely (we all know what happens here), when there’s no correlation between service level choices and end user visibility to cost… well… you have a lot of tier-1 storage on the floor.
A chargeback tool like IBM Tivoli Usage and Accounting Manager (TUAM) completes the story we have been building…
You negotiate a set of storage service levels (like “Database”) with your application owners and business units.
You create the storage service catalog entry for “Database”
Your end users request some new “Database” capacity be assigned to a particular server.
You push the “Run now” button and the capacity is auto-provisioned.
Your end user receives an invoice (complete with individual line items for each class of service in which they are consuming capacity).
You’re in the cloud now!
Stay tuned for Part III of this post where I’ll explore some technical thoughts you’ll want to consider when picking a storage hypervisor.
I recently read an excellent post by Ron Riffe, a fellow IBMer discussing practical recommendations for introducing cloud techniques into a private storage environment – the end goal being to save your company a substantial amount of money while becoming more responsive to the needs of the business. The first of the four steps discussed in the post was to introduce a storage hypervisor – virtualization of your storage infrastructure. It’s a good idea, especially if you have already virtualized some or all of your production server environment with something like VMware.
But there’s more to it than just the efficiency and mobility you get from virtualizing. The customers we talk to are finding new value that rises out of the synergy when both the server and storage environments are virtualized. One example is in the area of data protection. In this post, I’m going to explain the 1+1=3 effect for data protection that comes from combining VMware with a good storage hypervisor.
Let’s start with a quick walk down memory lane. Do you remember what your data protection environment looked like before virtualization? There was a server with an operating system and an application… and that thing had a backup agent on it to capture backup copies and send them someplace (most likely over an IP network) for safe keeping. It worked, but it took a lot of time to deploy and maintain all the agents, a lot of bandwidth to transmit the data, and a lot of disk or tapes to store it all. The topic of data protection has modernized quite a bit since then.
Today, you’re using a server hypervisor (VMware) to efficiently pack several virtual machines onto one physical server – and to make it so you can deploy, move and decommission those VMs pretty much at will. If you are still using the old techniques for data protection (deploying an agent on each individual VM, and then transferring all the backup data for those VMs through the one IP network pipe) on that physical server, you’re probably running into significant performance and application availability problems, and also missing out on some significant savings (if you listen carefully, you can hear your backup environment screaming ‘modernize me, MODERNIZE ME!”).
Fast forward to today. Modernization has come from three different sources – the server hypervisor, the storage hypervisor and the unified recovery manager. The end result is a data protection environment that captures all the data it needs in one coordinated snapshot action, efficiently stores those snapshots, and provides for recovery of just about any slice of data you could want. It’s quite the beautiful thing.
Data capture: VMware has provided a nice set of APIs that allow disk arrays and backup vendors to intelligently drive snapshots of a VMware datastore (for the techies, these are the vStorage APIs for Data Protection, or VADP). The problem is that integration from a disk array to these APIs is a tier-1 kind of service that is found on very few disk arrays today. That’s where a good storage hypervisor comes in. A storage hypervisor will include its own integration between VMware VADP and hardware-assisted snapshots, and it will plug the control GUI directly into the VMware vCenter management console. That means, regardless of what type of disk array capacity you have chosen to use for your VMware data, the storage hypervisor will be able to do a hardware-assisted snapshot of the VMware datastore (all your VMs at once – sweet!).
Efficient storage: Here’s a scenario we see…
Administrators want to snapshot the VMware datastore 4 times a day. 4 days worth are maintained – 16 total snapshots “online”
For longer term recovery, they promote one snapshot each day to a unified recovery manager. 1 month of these are maintained – 31 total snapshots “nearline”
The snapshots can add up, so efficiency is important. For the “online” snapshots, a good storage hypervisor stores only incremental changes, compresses the result and stores it as a thin provisioned volume on lower-tier disk capacity (the new 3TB SAS drives make a nice choice). Notice in this scenario, the administrator is also promoting one of the snapshots each day (say, the midnight snapshot) to an enterprise recovery manager. If you are using IBM’s Tivoli Storage Manager Suite for Unified Recovery, then it will insert deduplication in the list of efficiency techniques being applied to the snapshot (incremental snapshots that are deduplicated, compressed, and stored on lower-tier disk… that’s about as efficient as it gets).
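A back-of-the-envelope model of that scenario helps show why the efficiency techniques matter; the daily change rate and the 50% compression and deduplication ratios below are assumptions for illustration, not measured figures:

```python
# Model of the scenario above: 4 incremental snapshots/day kept 4 days
# "online" (16 snapshots), one/day promoted and kept 31 days "nearline".
# Change rate, compression, and dedup ratios are illustrative assumptions.

def snapshot_footprint_tb(datastore_tb, daily_change_rate,
                          compression=0.5, dedup=0.5):
    per_snapshot = datastore_tb * daily_change_rate / 4   # incremental, 4/day
    online = 16 * per_snapshot * compression              # compressed, thin
    nearline = 31 * (datastore_tb * daily_change_rate) * compression * dedup
    return online, nearline

online, nearline = snapshot_footprint_tb(10, 0.05)   # 10TB datastore, 5%/day
print(f"online ~{online:.2f} TB, nearline ~{nearline:.2f} TB")
```

Under these assumptions, 47 retained snapshots of a 10TB datastore consume only a few terabytes in total, rather than hundreds, which is the whole argument for incremental, compressed, deduplicated snapshots on lower-tier disk.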
Flexible recovery: Whether the snapshot is online or nearline, the only reason you have it is so that you can recover when something (anything) goes wrong. A good hypervisor / unified recovery manager combination will give VMware administrators the ability to peer inside the snapshot and recover individual files, virtual volumes, or entire VMs. Using the scenario above, your recovery point would be no more than 6 hours old for the last 4 days, and your recovery time would be measured in minutes.
IBM offers one of the world’s best-known unified recovery managers and the world’s most widely deployed storage hypervisor. With over 7,000 storage hypervisor deployments, we’ve had a lot of opportunity to build some depth. Deep integration with VMware for modernizing your data protection environment is one example. If you are running VMware and haven’t yet modernized data protection, IBM can help. You can learn more at the following links.
Join the conversation! The virtual dialogue on this topic will continue in a live group chat on September 23, 2011 from 12 noon to 1pm Eastern Time. Join some of the Top 20 storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
Simplify Data Protection and Reduce Costs With Unified Recovery Management
On September 22, we will be hosting an educational webcast that will address the challenges of providing data protection and recovery for rapidly growing amounts of diverse enterprise data. During this call, you will hear about our unified recovery management solution that can help reduce complexity, risk and costs. Included in this solution is a new simple, value-based option for procuring and managing software licenses.
Speaker: Rich Vining, Product Marketing Manager
Date: September 22, 2011 Time: 11:00 AM Eastern US
Please register for this event using this link. After registering you will receive a confirmation note with call-in instructions.
To borrow a phrase from a fellow blogger… Interest from customers on cloud storage is very, very hot, and that’s been keeping us very, very busy. The interest underscores the fact that public storage cloud providers have sent a “cost shockwave” through the industry and customers are taking notice.
While CIOs may still be too concerned about security and service levels to put much real corporate information in the public cloud, they have taken notice that these service providers are offering storage capacity at prices often lower than what they are paying for their own private storage. Sure, a service provider theoretically has more economy of scale and so can demand a better price from their hardware vendors, but they also have some profit margin to build into their “service”. There has to be more to it. The customers I talk to are wondering what these service providers are doing to operate at those costs – and whether any of their techniques can be applied in a private storage environment.
The situation begs the question “what is it that differentiates these public storage clouds from the traditional private storage environments that most clients operate?” From our experience with customers, there are four significant differences.
Storage resources are virtualized from multiple arrays, vendors, and datacenters – pooled together and accessed anywhere. (as opposed to physical array-boundary limitations)
Storage services are standardized – selected from a storage service catalog. (as opposed to customized configuration)
Storage provisioning is self-service – administrators use automation to allocate capacity from the catalog. (as opposed to manual component-level provisioning)
Storage usage is paid per use – end users are aware of the impact of their consumption and service level choices. (as opposed to paid from a central IT budget)
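To make those four differences concrete, here is a toy sketch of how pooled capacity, a standard service catalog, self-service provisioning, and pay-per-use metering fit together. Everything here (tier names, prices, class design) is invented for illustration, not any vendor's actual implementation:

```python
# A toy model of the four public-cloud storage practices: pooled virtual
# capacity, a standard service catalog, self-service provisioning, and
# pay-per-use chargeback. All names and prices are invented.

CATALOG = {
    # service tier: (cost per GB per month, description)
    "gold":   (0.50, "tier-1 disk, hourly snapshots"),
    "silver": (0.25, "midrange disk, daily snapshots"),
    "bronze": (0.10, "capacity disk, weekly snapshots"),
}

class StoragePool:
    """Capacity pooled from many arrays, consumed in virtual slices."""
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.allocations = []          # (owner, tier, gb)

    def provision(self, owner, tier, gb):
        """Self-service: pick a tier from the catalog, get a virtual volume."""
        if tier not in CATALOG:
            raise ValueError(f"unknown service tier: {tier}")
        self.allocations.append((owner, tier, gb))
        return f"{owner}-vol{len(self.allocations)}"

    def monthly_bill(self, owner):
        """Pay per use: each owner sees the cost of their own choices."""
        return sum(CATALOG[t][0] * gb
                   for o, t, gb in self.allocations if o == owner)

pool = StoragePool(physical_gb=100_000)
pool.provision("payroll", "gold", 500)
pool.provision("payroll", "bronze", 2_000)
print(pool.monthly_bill("payroll"))   # 0.50*500 + 0.10*2000 = 450.0
```

The point of the model is the last line: because usage is metered per owner and per tier, the application owner feels the cost of choosing "gold" for everything.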
In this post, I’m going to try to explain these four concepts in sufficient detail that somebody responsible for a private storage environment could walk away with some practical recommendations that could save their company a pile of money. Most of this isn’t really original (the concepts have been around for a while), but so few enterprises operate this way that the person who introduces their company to these ideas often looks like a genius (and who doesn’t like that!!). It’s a long topic, so I’ve broken it into 3 posts.
In Part I of this post: I’ll explain the value of virtualizing storage resources. Hint: you’ve likely already done it to your server resources with some sort of server hypervisor like VMware vSphere, IBM PowerVM, or Microsoft Hyper-V… so now let’s look at what you get from doing it to your storage resources with a storage hypervisor.
In Part II of this post: I’m going to explain how public storage clouds use management controls like service catalogs, self-service provisioning, and pay-per-use to drive down their costs. I’ll also try to offer some practical ideas for using these techniques in a private enterprise setting to gain similar efficiencies.
In Part III of this post: I’m going to explore some technical thoughts you’ll want to consider when picking a storage hypervisor.
Ready to jump in?
Storage resources are virtualized. Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them. Server hypervisors and the virtual machines they manage have improved efficiency (no more wasted compute resources), freed up mobility, and ushered in a whole new “cloud” language.
Well, the same ideas apply to storage. As administrators catch on to these ideas, it won’t be long before we’ll be asking the question “Do you remember back when virtual machines used disks that really were physical arrays (all that “physical” stuff that kept everything in one place and slowed all your processes down)?”
In August, Gartner published a paper that observed “Heterogeneous storage virtualization devices can consolidate a diverse storage infrastructure around a common access, management and provisioning point, and offer a bridge from traditional storage infrastructures to a private cloud storage environment” (there’s that “cloud” language). So, if I’m going to use a storage hypervisor as a first step toward cloud enabling my private storage environment, what differences should I expect? (good question, we get that one all the time!)
Perhaps the most obvious expectations are improved efficiency and data mobility. The basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, you can move a slice (or virtual volume) from tier to tier, from vendor to vendor, and now, from site to site all while the applications are online and accessing the data. This opens up all kinds of use cases that have been described as “cloud”. One of the coolest is inter-site application migration. Just recently, a hurricane hit the eastern coast of the United States. If your datacenter had been in the projected path of that hurricane and if you had implemented both a server hypervisor (let’s say VMware vSphere for your Intel servers and IBM PowerVM for your Power systems), and a storage hypervisor platform (let’s say IBM SVC), then here’s what you might have said: “Hey, the hurricane is coming, let’s move operations to another datacenter further inland…” IBM SVC Stretched-cluster allows you to access the same data at both locations giving you the ability to do an inter-site VMware vMotion and PowerVM Live Partition Mobility (LPM) move – non-disruptively. As far as the end users are concerned, their applications are running in a private cloud. For you… you avoided a disaster and got to sleep well that weekend.
But storage hypervisors are more, much more than just virtual slices and data mobility. Remember, we’re trying to think like a service provider who is driving cost out of the equation. Sure, we’re getting high utilization from allocating virtual slices, but are we being as smart as we could be about allocating those slices? A good storage hypervisor helps you be smart.
Thin provisioning: You have a client that asks for 500GB of new capacity. You’re going to give it to him as thin provisioned virtual capacity which is a fancy way of saying you’re not going to actually back it with real physical storage until he writes real data on it. That helps you keep cost down.
Compression: Same guy also asks to keep several snapshot copies of his data for recovery purposes. You’re going to start by giving him thin provisioned capacity for those snapshots, but you’re also going to compress whatever data those snapshots produce – again adding to your efficiency.
Agnostic about vendors: Because you’re providing virtual storage resources from a storage service catalog (we’ll talk more about that in Part II of this post), you have the freedom to shift the physical storage you operate from all tier-1 to a more efficient mix of lower tiers, and while you’re doing it you can create a little competition among as many disk array vendors as you like to get the best price / support.
Smart about tiers: If you shut your eyes real tight and think about the concept of a “virtual” disk that is mobile across arrays and tiers, you’ll quickly start asking questions about having the storage hypervisor watch for I/O patterns on blocks within that virtual disk that would benefit from higher tier capacity, like solid-state (SSD) or flash disk for example. A good storage hypervisor will automate the detection of such patterns and move hot data blocks to these highest tiers of storage if you have them.
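The hot-block detection just described can be modeled in a few lines. This is a deliberately simplified sketch (the threshold, tier names, and class are invented; a real storage hypervisor does this continuously and non-disruptively at sub-volume granularity):

```python
# A toy version of automated tiering: count I/Os per block of a virtual
# disk, then promote the busiest blocks to a faster tier. The threshold
# and tier names are invented for illustration.

from collections import Counter

class TieredVirtualDisk:
    def __init__(self, hot_threshold=100):
        self.io_counts = Counter()      # I/Os observed per block
        self.tier = {}                  # block -> "ssd" or "nearline"
        self.hot_threshold = hot_threshold

    def record_io(self, block_no):
        self.io_counts[block_no] += 1

    def rebalance(self):
        """Move hot blocks up to SSD, everything else to nearline disk."""
        for block, count in self.io_counts.items():
            self.tier[block] = ("ssd" if count >= self.hot_threshold
                                else "nearline")
        self.io_counts.clear()          # start a fresh observation window

vdisk = TieredVirtualDisk()
for _ in range(500):
    vdisk.record_io(42)                 # block 42: a hot database index page
vdisk.record_io(7)                      # block 7: touched once
vdisk.rebalance()
print(vdisk.tier[42], vdisk.tier[7])    # ssd nearline
```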
Are you getting the picture of why so many enterprises are beginning to agree with Gartner that a storage hypervisor can be a great first step in transitioning traditional IT into a private cloud storage environment? Application owners come to you for storage capacity because you’re responsible for the storage at your company. In the old days, if they requested 500GB of capacity, you allocated 500GB off of some tier-1 physical array – and there it sat. But then you discovered storage hypervisors! Now you tell that application owner he has 500GB of capacity… What he really has is a 500GB virtual volume that is thin provisioned, compressed, and backed by lower-tier disks. When he has a few data blocks that get really hot, the storage hypervisor dynamically moves just those blocks to higher tier storage like SSDs. His virtual disk can be accessed anywhere across vendors, tiers and even datacenters. And in the background you have changed the vendor storage he is actually sitting on twice because you found a better supplier. But he doesn’t know any of this because he only sees the 500GB virtual volume you gave him. It’s “in the cloud”.
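That 500GB story can be boiled down to a toy thin-provisioned volume: the owner sees the full virtual size, but physical capacity is consumed only as data is written, and a crude compression factor shrinks it further. All the numbers and the 2:1 ratio here are invented for illustration:

```python
# A toy thin-provisioned, compressed volume: virtual size is fixed,
# physical consumption grows only with writes. Numbers are invented.

class ThinVolume:
    def __init__(self, virtual_gb, compression_ratio=2.0):
        self.virtual_gb = virtual_gb
        self.written_gb = 0
        self.compression_ratio = compression_ratio

    def write(self, gb):
        if self.written_gb + gb > self.virtual_gb:
            raise IOError("virtual volume full")
        self.written_gb += gb

    @property
    def physical_gb(self):
        """Real capacity backing the volume right now."""
        return self.written_gb / self.compression_ratio

vol = ThinVolume(virtual_gb=500)
print(vol.physical_gb)      # 0.0 -- nothing is backed until data arrives
vol.write(120)
print(vol.physical_gb)      # 60.0 -- 120GB written, compressed 2:1
print(vol.virtual_gb)       # 500 -- the owner still sees the full 500GB
```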
Join the conversation! The virtual dialogue on this topic will continue in a live group chat on September 23, 2011 from 12 noon to 1pm Eastern Time. Join some of the Top 20 storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
I wanted to let everyone know that IBM Tivoli Storage FlashCopy Manager for Windows Version 2.2.1 was just released!
In June of this year, I blogged about IBM Tivoli Storage FlashCopy Manager version 2.2.0. I talked about how FlashCopy Manager 2.2 provides fast application-aware backups and restores leveraging advanced snapshot technologies. I also discussed how FlashCopy Manager on Windows 2.2.0 added new support for Microsoft Exchange Server 2010 and Microsoft SQL Server 2008 R2 as well as other enhanced performance and functionality.
We continue to add more functions and features to IBM Tivoli Storage FlashCopy Manager. This past Friday (December 10th, 2010), IBM released IBM Tivoli Storage FlashCopy Manager Version 2.2.1 with the following changes:
Updates Applicable to All Platforms
Support for SVC 6.1
Support for IBM Storwize® V7000
Updates Applicable to all FlashCopy Manager components that run on AIX, Linux, and Solaris
Support for AIX 7.1 in non-SAP environments
Support for Oracle ASM on Solaris and Linux
Support for SVC and IBM Storwize® V7000 Space Efficient target volumes for FlashCopy Manager Cloning operations on AIX, Linux, and Solaris
Updates Applicable to the FlashCopy Manager for Exchange Component
Support for VSS backups to a TSM Server without needing a TSM for Copy Services or FCM license
Support for SVC and DS8000 family devices in a VMware guest OS environment
Improved support for VSS backups in clustered and offload environments
Updates Applicable to the FlashCopy Manager for SQL Component
Improvements to Query & Backup Performance in Environments with Large Numbers of SQL Servers
Support for VSS backups to a TSM Server without needing a TSM for Copy Services or FCM license
Support for SVC and DS8000 family devices in a VMware guest OS environment
Improved support for VSS backups in clustered and offload environments
The Central Depository Company of Pakistan Limited (CDC) is the only depository in Pakistan, handling the electronic settlement of transactions carried out at the country's three stock exchanges.
Business need: With numerous point management tools, time-consuming manual processes and no single help desk, IT administrators were constantly operating in a reactive mode and faced just 90 percent system availability.
Solution: IBM Business Partner Gulf Business Machines helped CDC implement an Integrated Service Management solution from IBM that increases IT efficiency while improving the effectiveness of business services.
Benefits: 90 percent reduction in average time for root cause analysis; estimated 50 percent reduction in time to support new lines of business; 98 percent improvement in service level agreement (SLA) levels.
"IBM Tivoli Storage Productivity Center gave us greater visibility into storage utilization, helping us optimize capacity planning and improve our storage ROI to save 30%" —Syed Asif Shah, Chief Information Officer, Central Depository Company of Pakistan Limited
Read the complete case study for more details on the solutions CDC used to implement an Integrated Service Management solution. More success stories of other customer implementations of IBM technologies can be found here.
I have been working in storage and storage management my entire career (which has been more years than I want to admit) and I was recently advised by a wise co-worker to start writing about it. Although blogging has been around for quite some time and has certainly increased in popularity in recent years, this is the first time I have braved this form of communication. I stared at a blank blinking cursor for inspiration and decided to write about one of my favorite storage products, the Tivoli Storage Productivity Center.
Several weeks ago IBM announced the new 4.2 release of Tivoli Storage Productivity Center. This release includes some interesting enhancements that I am excited to see in the product. One feature that has received a lot of buzz is the lightweight storage resource agents. TPC started down the path of lighter agents when they introduced a slimmer, but not completely lightweight, version of the agents by moving from Java to C for enhanced performance. These new agents were limited to Windows, AIX, and Linux. The new 4.2 release added HP-UX and Solaris support as well as support for file- and database-level management. The new release is backward compatible, meaning that customers who want to continue using agents they set up previously can do so. New customers are no longer required to use the Common Agent Manager.
TPC 4.2 has introduced full support for XIV devices. TPC 4.1 did have toleration support for XIV (basic discovery and capacity information), but with the new release you can provision, get performance information, and use the data path explorer for your XIV machine.
If you have TPC deployed on a System Storage Productivity Center (SSPC), you can upgrade at any time. Customers buying a new SSPC machine after September 3, 2010 will automatically have TPC 4.2 pre-installed on the machine.
I could say a lot more about the new TPC 4.2 release, but instead I am going to point you to a wonderful blog entry that my colleague, Tony Pearson, wrote when the new release was announced. He provides some great insights about the new features in TPC 4.2.
Wow - I made it to the end of my first blog and I am beginning to understand why blogging has become so popular. I am starting to wonder why it took me so long to write my first post!
This is a great event that is available to get all your questions answered... Have storage questions regarding storage software (Tivoli Storage Manager, Tivoli Storage Manager FastBack, Tivoli Storage Productivity Center)? Come and ask the experts!
I wanted to share some information about an article that we just published with regards to backing up Exchange Server 2010.
Along with all the other new features of Exchange Server 2010, Microsoft introduced Database Availability Groups (DAGs). DAGs are part of the large focus that Microsoft put on High Availability and Site Resilience within Exchange Server 2010. DAGs allow you to have passive database copies (aka "replicas") that can serve as hot standbys for protection against machine failures, database failures, network failures, viruses, or other issues that may cause an access problem to a database. DAGs are similar in function to Exchange Server 2007 Cluster Continuous Replication (CCR) replicas. However, they extend the capabilities even further. One of the key benefits that customers get when they use DAGs in their enterprise is the ability to completely offload backups from their production Exchange Servers. That means they can run all of their backups from a database copy instead of the production database so as not to impact their production Exchange servers. This enables the production Exchange Servers to spend their resources on doing what they know best, i.e. handling email and facilitating collaboration.
We just published an article (which includes a sample script) to help you automate backing up your Exchange Server 2010 DAG databases. We hope you will find it quite helpful in setting up your backup strategy:
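The published article and its sample script show the real commands; the core selection logic behind such a script, though, is simple enough to sketch. The idea is to back up from a healthy passive DAG copy so the active copy keeps serving mail. The data structures and field names below are invented for illustration, not the actual script:

```python
# A hedged sketch of DAG backup-copy selection: prefer a healthy passive
# copy (the "replica") so the production database is left alone, and fall
# back to the active copy only if no passive copy qualifies. The dict
# fields here are invented; real scripts query Exchange for copy status.

def pick_backup_copy(copies):
    """Return the best database copy to back up from, or None."""
    passive = [c for c in copies if not c["active"] and c["healthy"]]
    if passive:
        # Prefer the copy with the smallest replay queue (most up to date).
        return min(passive, key=lambda c: c["replay_queue"])
    active = [c for c in copies if c["active"]]
    return active[0] if active else None

copies = [
    {"server": "MBX1", "active": True,  "healthy": True,  "replay_queue": 0},
    {"server": "MBX2", "active": False, "healthy": True,  "replay_queue": 3},
    {"server": "MBX3", "active": False, "healthy": True,  "replay_queue": 9},
]
chosen = pick_backup_copy(copies)
print(chosen["server"])   # MBX2 -- healthy passive copy, shortest queue
```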
Working with IBM, a hospital in Asia Pacific gained a data protection solution that meets users' data availability requirements, scales on demand to support a growing warehouse of patient data and medical images, and simplifies data migration and data recovery tasks.
The benefits of the solution include a 50% reduction in backup window; restores individual Microsoft Exchange objects in minutes; restores systems in under 10 minutes.
Read the complete case study to see how this Asia Pacific hospital gained peace of mind with virtualized data protection from IBM.
More success stories of other customer implementations of IBM technologies can be found here.
There are a few important things to take note of. Microsoft Exchange Server 2010 included some significant changes, a number of which affect backup and restore. For example, under Exchange Server 2010:
Legacy-style backups (aka "streaming" backups) are no longer supported by Microsoft
VSS-style backups are the only supported online backup method
Exchange storage groups were removed completely
The Recovery Database replaced the Recovery Storage Group (RSG)
Database Availability Groups (DAGs) have replaced LCR, CCR, and SCR replication
Single Copy Clustering (SCC) is no longer available
With the release of Data Protection for Exchange version 6.1.2 and IBM Tivoli Storage FlashCopy Manager version 2.2 on June 4, 2010, we have implemented support for these changes. Here are details about the TSM functionality for Exchange Server 2010 that will be available on June 4, 2010:
Full Exchange Server 2010 support
Command-line Interface (CLI)
Graphical User Interface (GUI)
Database Availability Group (DAG) support
Query Exchange Information
Shows all databases with various attributes
Shows VSS component information
Full, Copy, Incremental, Differential
Back up from production database
Back up from a passive database copy (replica)
Back up to TSM Server
Back up to LOCAL snapshot
Offloaded backup to TSM
Shows all backups with their attributes
Restore from TSM Server
Restore into production database
Restore into "Recovery Database"
VSS Instant Restore from LOCAL snapshot
VSS Fast Restore from LOCAL snapshot
Mailbox Restore (IMR) and Item-Level Recovery
FlashCopy Manager and MMC Integration
Note: VSS backups to the TSM Server are enabled without the requirement for a TSM for Copy Services or FlashCopy Manager license.
Finally, a number of you were a part of the FlashCopy Manager 2.2 Beta Program and/or the Data Protection for Exchange 6.1.2 "Limited Availability" program, so thank you for helping us make it a great release!
In December of last year, I blogged about IBM Tivoli Storage FlashCopy Manager for Windows version 2.1. I talked about how FlashCopy Manager provides fast application-aware backups and restores leveraging advanced snapshot technologies. I also discussed how FlashCopy Manager on Windows supports Microsoft SQL Server and Microsoft Exchange Server using Microsoft's Volume Shadow Copy Service (VSS) and how it integrates into your enterprise whether you have Tivoli Storage Manager or not. So, if you haven't read my previous blog about FlashCopy Manager on Windows, why not check that out first, then come back to learn more about the new features we just announced!
This Friday, June 4, 2010, IBM will release IBM Tivoli Storage FlashCopy Manager Version 2.2. Some of the exciting new Windows features in this release include:
Support for Microsoft Exchange Server 2010
Support for Microsoft SQL Server 2008 R2
Performance improvements for Microsoft Exchange Server mailbox history and mailbox restore operations
Performance improvements for large Microsoft SQL Server environments
Enhanced integration with SAN Volume Controller via enablement of VSS Instant Restore when there are multiple backup generations on space efficient target volumes
Did you know? FlashCopy Manager also supports UNIX platforms! Some of the exciting new UNIX features included in FlashCopy Manager Version 2.2 are:
Support for Linux x64 and Solaris SPARC operating systems
Database cloning support
Enhanced integration with SAN Volume Controller via automatic detection of deleted snapshots
A customizable agent that enables you to back up applications not directly supported by the product
During Pulse 2010 in Las Vegas, I interviewed Alistair Mackenzie from Silverstring, an IBM Business Partner. Just last week Silverstring launched TSMagic, helping you understand your TSM estate like never before. See the news article for more information on TSMagic.
Check out the live video interview with Alistair:
If you were unable to attend the live Pulse 2010 event in Las Vegas, you can still attend the Virtual event - register today. You can also check out the Pulse Comes To You Web site to see if there will be an event in a city near you.
In the second half of 2009, the International Technology Group (ITG) was contracted to do a detailed analysis of IBM and competitive storage offerings for SAP to determine a three-year total cost of ownership (TCO) for each product included in the comparison. ITG developed two comparisons, one for Large Enterprise accounts and a second for Midmarket accounts, and chose appropriate competitive offerings for each. For the Large Enterprise accounts, ITG included: EMC V-Max systems vs. IBM DS8000 Systems and HP XP2400 vs. IBM XIV Systems. For the Midmarket accounts, ITG included: HP Enterprise Virtual Array (EVA) vs. IBM DS5000 Systems and HP EVA vs. IBM XIV Systems. ITG developed three-year TCO comparisons and provided IBM an Executive Summary and a Detailed Analysis report that can be shared with customers.
Read the outcome of the analysis:
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Midsize Installations - Executive Summary
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Midsize Installations - Management Brief
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Enterprise Installations - Executive Summary
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Enterprise Installations - Management Brief
ITG also participated in a Webcast that is available for replay discussing the results of their studies of comparative disk systems cost for SAP environments in large and midsized organizations.
While I was at Pulse 2010 in Las Vegas, I had the opportunity to interview Scott Sterry from Cristie Software Limited. Cristie Bare Machine Recovery integrates with IBM Tivoli Storage Manager to provide a Bare Machine Recovery (BMR) solution for Windows®, Linux, SUN Solaris and HP-UX.
If you were unable to attend the live Pulse 2010 event in Las Vegas, you can still attend the Virtual event - register today. You can also check out the Pulse Comes To You Web site to see if there will be an event in a city near you.
While I was at Pulse 2010 in Las Vegas, I had the pleasure of meeting and interviewing Nils Lau Fredriksen, CIO for the Region of Southern Denmark. Nils was one of the five CIOs that participated in the CIO panel during the Day 2 General Session. It was very interesting to hear his experience with implementing integrated service management along with the other CIOs that were on the panel.
Nils went into more depth during his presentation on Wednesday, Feb. 24th, regarding his experience of implementing integrated service management (or what he calls quality management) at the Region of Southern Denmark. I attended the session, and there were many questions from the audience.
I met up with Nils after his presentation to get a quick interview, which you can watch below...
Today I did several live video interviews. Let me be honest with you, it is clear that I wasn't meant to be in the journalism profession, uhm, now that is the truth!
I met many IBM clients and business partners throughout this week at Pulse, and today I did an interview with Roger Finney from Logicalis, which is an IBM Business Partner. We did this interview right outside the expo hall at the MGM Grand hotel, so you can hear the airplanes flying over from McCarran International Airport.
Logicalis has been an IBM Business Partner for over 14 years and they are both Software Value Plus authorized and Tivoli Accredited. In this video, I ask Roger to provide some details about how Logicalis has helped their customers with their storage management needs.
Today (Monday) was an action packed and exciting day.
The day started off with the General Session, where Al Zollar, the General Manager of Tivoli, opened with a discussion around Smarter Planet and how the world is getting smarter - Instrumented - Interconnected - Intelligent. He gave several examples of how companies are shifting to become smarter: smarter buildings, smarter healthcare, smarter cities, etc. By becoming smarter, Al explained, both risk and complexity can be reduced.
I enjoyed hearing about the Capital Region of Denmark, which has over 1.5 billion bytes archived and revolutionized its storage management so that all of that data is managed by just 4 staff members.
The presentation then went into Integrated Service Management for Data Centers, for Design & Delivery, and for Industries, which consists of:
Service architectures tailored by industries
Service lifecycle management
Unified management of service requests and incidents
and the importance of... Visibility, Control, and Automation!
There were also some new storage announcements made in the general session (stop by the expo to see the demo of each product):
The other speakers included Rational General Manager Danny Sabbah, who dove deeper into Integrated Service Management for Design & Delivery, and Laura Sanders, Tivoli Vice President of Strategy & Development, who gave an entertaining demonstration with live code showing a smarter city (accompanied by Dave Lindquist, IBM Fellow, Vice President & CTO, Tivoli Software, and Dr. Wing To, Vice President Strategy and Product Management, Tivoli Software). After the demo, the last IBM speaker was Mike Rhodin, who went into more depth around Integrated Service Management for Industries.
The guest speaker to wrap up the General Session was former Vice President of the United States, Al Gore.
After the General Session, the rest of my day is a blur. It was filled with attending the Storage & Information Infrastructure track kick-off, meetings with customers and business partners to do impromptu video interviews/podcasts, tweeting, reporting storage highlights for the Pulse Points daily newsletter, checking out the expo and the demos, and scheduling more video interviews. I had to have walked at least 6 miles today with making trips to and from the conference center several times. I was a little bummed that I wasn't able to attend as many of the customer case sessions in the storage track; I'll have to make up for that tomorrow.
Pulse 2010 got off to a great start with a very successful Business Partner Summit. There were several Storage partners that attended the Storage Breakout session. We were even able to get some of them to sign up for professional video interviews...
During the Tivoli Storage Software Strategy and How to Sell It! session, Dan Galvan, VP of IBM Systems Storage Marketing, gave an overview of the Smarter Planet initiative, and Ron Riffe provided an in-depth presentation on the storage software portfolio. Partners were informed of three solution plays that they can focus on for storage. Many questions were asked and answered.
We also provided details on how our partners can stay connected during and after Pulse with IBM Storage networks and social media. These networks are also available to our customers and our partners' customers.
Tivoli Storage Blog for getting conference updates and daily highlights from Pulse 2010. This blog is used to discuss many different topics like data reduction, virtualization, new product announcements and more..
IBM Storage Community for managing your contacts at Pulse, sharing links and bookmarks, and providing feedback on the conference
IBM Storage on Twitter for listening and contributing to real-time buzz with other Pulse attendees and organizing meetups. Use #ibmpulse in your tweets. You can also follow me on Twitter
In my previous blog post I discussed some of the viable approaches to data protection with virtual machines. Before I delve into the pros and cons of each approach, I'd like to discuss the fundamental differences between file-level and block-level backup (and solicit your input :-) ).
Encapsulation is one of the basic rules of software design. Simply put, it's the computer geek's equivalent of the famous "Don't ask, don't tell" policy. The idea is pretty simple: let's assume our file system is component A and our disk system is component B. Components A and B publish a public interface that others can use, but they hide their internal mechanisms from the other components. This enables us to do some nifty tricks, such as RAID: as far as the file system is concerned, it is working with a "regular disk"; it is unaware that our disk system actually took the 100GB of disk space we defined and partitioned it into multiple stripes spread across 5 different disks in order to provide it (the FS) with better performance and hardware fault tolerance. This principle is used in other places too, but you have to agree that it comes in pretty handy.
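The encapsulation point can be made concrete with a few lines of code: the file system addresses one "regular disk" by logical block number, while the disk layer quietly maps those blocks onto five physical disks. This is a simplified RAID-0-style layout; the disk count and stripe size are arbitrary:

```python
# A small illustration of encapsulation: the file system sees one logical
# disk, while the disk layer stripes blocks across five physical disks.
# (Simplified RAID-0 layout; stripe size and disk count are arbitrary.)

DISKS = 5
STRIPE_BLOCKS = 4   # logical blocks per stripe unit

def locate(logical_block):
    """Map a logical block to (disk, physical block), hidden from the FS."""
    stripe_unit = logical_block // STRIPE_BLOCKS
    offset = logical_block % STRIPE_BLOCKS
    disk = stripe_unit % DISKS
    physical = (stripe_unit // DISKS) * STRIPE_BLOCKS + offset
    return disk, physical

# The file system writes "block 17"; the disk layer puts it on disk 4.
print(locate(17))   # (4, 1)
```

The file system never calls `locate` itself; that is the whole point. Each component only sees the other's public interface.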
But why do I even mention "encapsulation", and how is it relevant to file- vs. block-level backups?
The point I am trying to make is that the disk level is not aware of the "file contents" and the file system is not aware of the "disk layout". This actually dictates the pros and cons of these two very different approaches to data protection.
With file-level backups it's really easy to define which files you want to protect. Then, when the time comes, someone has to access the files and move the data they contain to some sort of data repository. In order to do that, you must deal with issues such as:
- Open files
- Interdependencies between multiple files
- Identifying which (sub)files have changed
- For structured data (databases, etc.), do we back up the entire file (or file group) or only the portions that have changed?
Block-level backups are usually pretty straightforward: there's a mechanism that keeps track of the changes in "real time" (this usually enables CDP, but that's a whole different story), and when the time comes the data is moved to the data repository. But this technology has its own challenges:
- Minimum granularity is usually a volume
- Hard to exclude unused file data (page file?)
- Recovering files from a block level backup
- Communicating with applications (and File System) to ensure backup consistency.
Generally speaking, block-level backups have a "lower overhead" than file-level backups, so if you decided to virtualize your environment and keep using agents on the individual virtual machines, you would probably want to use a block-level backup solution. File-level backups are still viable (especially if they skip the "indexing" process by using an FS filter or journaling and allow for "sub-file" incremental backup), but you will need to be more careful when planning your backup windows in order to prevent VM sprawl.
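The "mechanism that keeps track of the changes" behind block-level incremental backup can be modeled as a dirty-block bitmap: writes mark blocks as changed, and the backup pass copies only those blocks. This toy model is purely illustrative (class and sizes invented), but it shows where the lower overhead comes from:

```python
# A minimal changed-block-tracking model: a bitmap marks dirty blocks as
# writes happen, and the incremental backup copies only those blocks.
# Purely illustrative; real implementations track this in the I/O path.

class TrackedVolume:
    def __init__(self, num_blocks):
        self.blocks = [b"\x00"] * num_blocks
        self.dirty = set()              # the changed-block bitmap

    def write(self, block_no, data):
        self.blocks[block_no] = data
        self.dirty.add(block_no)        # tracked in "real time"

    def incremental_backup(self):
        """Copy only changed blocks, then clear the bitmap."""
        changed = {b: self.blocks[b] for b in sorted(self.dirty)}
        self.dirty.clear()
        return changed

vol = TrackedVolume(1024)
vol.write(7, b"mail")
vol.write(512, b"db")
backup = vol.incremental_backup()
print(sorted(backup))       # [7, 512] -- far fewer than 1024 blocks
```

Note that nothing in `TrackedVolume` knows which file block 7 belongs to; that ignorance is exactly the challenge listed above under "Recovering files from a block level backup".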
Stay tuned, next we'll discuss other approaches such as proxy backups
The last time I blogged, I was telling you about IBM Tivoli Storage FlashCopy Manager on Windows and just how cool it is. Well, I am working on some more neat stuff, and I wanted to tell you about the beta program for the upcoming release of IBM Tivoli Storage FlashCopy Manager. If you want to test some of the new functions and features of the upcoming release, please contact Mary Anne Filosa (firstname.lastname@example.org) or your IBM Sales representative to get details.
The enrollment period is ending soon, so don't wait to be a part of the action!
I don't know about you, but I have been virtualizing like crazy over the last few years: humongous servers have been turning into medium-sized virtual machines, and test and lab environments have turned into small files running on my laptop from a flash drive. My IT department has been virtualizing even more, consolidating servers, sharing storage resources among multiple machines, and converting NICs (Network Interface Cards) into virtual switches (I still haven't figured out how they did that). The move into a virtualized environment is very useful for reducing energy consumption, decreasing physical server and storage footprint, and driving up processor and storage utilization, but it also has some side effects when it comes to data protection. The problem begins at the same place that drove us into virtualization to begin with: resource sharing. You may now have 10 virtualized servers running on the same physical host. If your backup process consumed only 5% CPU and I/O on your physical server, imagine what would happen if all 10 virtual machines kick off the backup process at the same time... There are multiple valid approaches for providing data protection to those virtual machines, and I'll try to address each and every one of them in upcoming blogs…
File-based vs. block-based backups
Keep your existing backup methodology (Agent-based backup)
Perform the backup through the host (VMware console/hyper-v host OS)
Hardware based snapshots
Utilize vendor specific APIs that provide "agentless" or off-host backup (VMware's VCB and vStorage)
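Whichever approach you choose, the contention problem above (all 10 VMs kicking off backups at once on one host) is usually handled by staggering start times across the shared host. A minimal sketch, with made-up VM names and a made-up nightly window:

```python
from datetime import datetime, timedelta

# Toy scheduler: spread N VM backups evenly across a nightly window so
# they never all hit the shared host's CPU and I/O at the same instant.

def stagger(vm_names, window_start, window_hours):
    """Assign each VM an evenly spaced start time within the window."""
    step = timedelta(hours=window_hours) / len(vm_names)
    return {vm: window_start + i * step for i, vm in enumerate(vm_names)}

vms = [f"vm{i:02d}" for i in range(10)]
schedule = stagger(vms, datetime(2010, 1, 1, 22, 0), window_hours=5)
for vm, start in schedule.items():
    print(vm, start.strftime("%H:%M"))
```

A real scheduler would also weight start times by VM size and throttle concurrent sessions, but even a simple even spread keeps the host from taking a 10x I/O spike.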
Other enhancements that might not necessarily be backup related, but have to be seriously considered when virtualizing, include:
Deduplication (client side or target side)
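As a concrete illustration of client-side deduplication, here is a toy sketch. Fixed-size chunking and an in-memory store are simplifications; real backup products use content-defined, often variable-size chunking and a persistent index on the target:

```python
import hashlib

# Toy client-side deduplication: split data into fixed-size chunks,
# hash each chunk, and send only chunks the target has not seen before.

CHUNK = 4096
store = {}                     # chunk hash -> chunk (stands in for the target)

def backup(data):
    """Return the number of bytes actually sent over the wire."""
    sent = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # only new chunks cross the wire
            store[digest] = chunk
            sent += len(chunk)
    return sent

first = backup(bytes(16 * CHUNK))    # 16 identical zero-filled chunks
second = backup(bytes(16 * CHUNK))   # everything is already stored
print(first, second)                 # 4096 0
```

Sixteen identical chunks cost only one chunk's worth of transfer the first time and nothing the second time; in a virtualized environment, where many VMs share nearly identical OS images, this kind of redundancy is exactly what deduplication exploits.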
Stay tuned; I'll go into more detail on file-based vs. block-based backups in my next blog.
With only 4 weeks until Pulse 2010 - The Premier Service Management Event - Optimizing the World's Infrastructure, I thought it might be helpful to provide some details around the sessions and activities that will be available to all of you storage and information infrastructure enthusiasts out there.... Here are a few sessions that you can attend each day. Sign up for these sessions and others today (requires only an IBM.com password - you do NOT have to be a Pulse registered attendee to create a Pulse schedule online)! (Mon, 22/Feb)
The Data Juggernaut Meets IBM -- Storage & Information Infrastructure Track Kickoff
How Principal Financial Group Upgraded to TSM 6 in a Veritas Clustered Environment
Sprint Storage Virtualization Success with SVC
How France Telecom Benefits from SVC Management and Thin Provisioning
TSM 6 Upgrade Experience at Brookshire Grocery Company
How Pacific Northern Gas and Tourism Australia achieved near instant recovery while reducing costs and risks with TSM and TSM FastBack
How A Major Dutch Insurance Company Got the Most from Its Storage Environment with SVC and TPC
How OhioHealth and VCU Health Systems Leverage IBM Data Protection Software and Storage Systems to Scale for Growth
A Technical Look Inside IBM's Next-Generation Archive Appliance -- the IBM Information Archive
AT&T Automates Server and Storage Provisioning with Tivoli Provisioning Manager
Reduce your Data Storage Footprint to help Survive the Data Tidal Wave
Implementing TSM FastBack at the US Department of State
The Oakwood Healthcare System's Virtualization Story
Shipping Portal INTTRA Supports the Global Supply Chain with a World-Class IT Infrastructure from IBM
Solving the Business Challenge with Excellence: An IT TotalSolutions Approach Success Case
Go to the online agenda tool to see additional Storage and Information Infrastructure sessions that may be of interest to you. There are also sessions in the Expo Theater Stream. Register and attend Pulse to take full advantage of all that will be offered:
Yesterday, in discussing IBM's fourth quarter 2009 financial results, IBM CFO Mark Loughridge had this to say about Storage Software:
"Tivoli storage continued its robust growth as customers manage their rapidly growing storage data. Data Protection as well as Storage Management grew double digits, with broad-based geography and sector growth."
Have you played around with IBM Tivoli Storage FlashCopy Manager on Windows yet? If not, maybe it's time to take a look.
When you think of FlashCopy Manager, think of snapshots. FlashCopy Manager provides fast application-aware backups and restores leveraging advanced snapshot technologies. I have been writing software as a developer for IBM Tivoli Storage Manager for almost 20 years now and this technology is one that is changing the industry. Yes, snapshots have been around for a while, but it isn't until the last few years that applications are really starting to embrace them, and in some cases, even require them for their backup needs. There is just too much data to process, too much overhead to back them up, and too little time. People want their applications to serve email and provide access to database tables, not spend their precious cycles on backups. FlashCopy Manager helps address these issues.
FlashCopy Manager follows on the heels of IBM Tivoli Storage Manager for Copy Services (TSM for CS), which provided snapshot support for Microsoft SQL Server and Microsoft Exchange Server using Microsoft's Volume Shadow Copy Service (VSS). The really cool thing is that you do not need to have a TSM Server in order to use FlashCopy Manager to manage your snapshots. It will work completely stand-alone if you want. But if you have a TSM Server already, you can use it to extend the power of FlashCopy Manager even more.
What is VSS? VSS is Microsoft's snapshot architecture. It provides the infrastructure for applications, storage vendors, and backup vendors to perform snapshots in a federated and efficient way. Microsoft thinks VSS and snapshots are important enough to require that any new software releases coming out of Redmond support backup and restore through VSS. If you are running Microsoft Exchange Server or Microsoft SQL Server, you should take a look at snapshots. Microsoft has been supporting snapshots with Exchange and SQL for years, but Microsoft Exchange Server 2010 is kicking it up a notch: Exchange Server 2010 supports backups only through VSS. Yes, you heard it right, Microsoft does not support legacy-style (streaming) backups with Exchange Server 2010. So, if you are planning a move to Exchange Server 2010, it really behooves you to start looking at Microsoft's Volume Shadow Copy Service (VSS), how it works, and the benefits and complexities it brings with it.
Microsoft's Volume Shadow Copy Service (VSS) is complex and involves multiple moving parts, so it will pay to invest some time in understanding it. I have put together some links that will help you get started:
You've probably heard your mother say "you never get a second chance to make a first impression". So, since today marks my first entry into the blogosphere, I wanted to hit a home run, providing not only some interesting perspective, but also some hard facts that readers can use to potentially save some time and money.
If you have been paying much attention to developments in storage and computing infrastructure in the last few years, you have noticed a significant trend toward virtualization. Servers aren't servers any more, they are virtual machines. Tapes aren't tapes any more, they are virtual tape libraries like the IBM TS7650 ProtecTIER Deduplication Appliance. And in the area of disk virtualization, the most widely adopted approach is the IBM SAN Volume Controller (SVC).
Up until now, disk virtualization has been an enterprise-wide proposition. Storage managers who are tasked with taking care of hundreds of TB's, and often PB's, of disks have for years turned to SVC to help eliminate the pain of migrating data between arrays. For these administrators, disk virtualization with SVC has also helped provide a common set of management interfaces and procedures across storage from different vendors, and has helped to create a common set of services like thin provisioning, snapshotting, and mirroring across different tiers of storage.
Not every storage manager, though, is responsible for PB's, or even hundreds of TB's of storage. Most administrators are just looking for an affordable and 'easy to manage' means of satisfying the next request for more storage on Exchange, or SAP, or... About a month ago, IBM introduced some important changes in its mid-range disk virtualization product, SVC EE, designed with these storage managers in mind.
Perhaps the best way to describe these changes is with a picture... One of the challenges with traditional disk arrays is that they are relatively inflexible. Think about it... the arrays that have a lot of function (thin provisioning, excellent snapshotting, mirroring, etc.) are generally large, monolithic things that can take up a lot of real estate and burn a lot of power before you get to the first byte of storage. On the other hand, the arrays that are more modular -- allowing incremental growth -- generally don't offer the best software capabilities. And what's more, all of them generally charge an arm and a leg for the software capabilities they do offer.
The important thing IBM did was to package its virtual controller software in an affordable form factor and price it in such a way that mid-sized administrators can build and grow their storage infrastructure modularly. Do you need more disk capacity for a new application? Add an IBM DS3400 SAS disk enclosure. Do you have plenty of capacity but just want some more performance or connectivity? Add an SVC 8A4 controller pair. Do you have plenty of performance but just want some more capacity for archiving? Add a DS3400 SATA disk enclosure. With this sort of modular approach to scaling, the incremental cost of adding capacity can be greatly reduced.
Regardless of how you choose to grow your virtual disk system, there is a valuable set of services all included in the base software license (i.e., at no extra charge). They include:
Transparent data migration from other arrays in your datacenter to improve application availability
Thin provisioning so you can get more effective use out of your storage assets
FlashCopy (IBM's name for snapshot copies) to cut down the time required for application backup and cloning. This is the newest addition to the list of included features. Prior to a month ago, SVC EE FlashCopy carried a separate price.
Although I have used IBM DS3400 disk enclosures in my example, a virtual disk system of unlimited size can be constructed using any number of IBM DS3400, DS4000 or DS5000 family disks. SVC EE can also virtualize up to 250 disks from other IBM or non-IBM disk systems.
Lower incremental cost for adding capacity. Efficient SAS and SATA disks. A valuable set of software functions included in the base price. Common management from the smallest configuration to the largest. Would that help save some time and money?
Living in Boulder, Colorado, I am constantly hearing about "green" initiatives such as recycling, composting, alternative transportation, etc. Over the past several years, my family has been doing a much better job of lessening our impact on the Earth through things such as recycling, buying environmentally friendly products and even signing up for energy saving smart grid technology.
I appreciate when corporations also do their part to reduce their environmental impact by leveraging greener technologies. But let's face it, most corporations act based on the impact to the bottom line (real or perceived) rather than the impact to the environment. Companies like IBM can make the decisions easier for clients by building products that improve performance while reducing energy use or other environmental impacts.
I'm proud when IBM delivers "green" technology and thus wanted to point your attention to this video about energy efficient storage. Craig Smelser, VP of Security and Storage Development at IBM Tivoli, introduces some of the storage challenges that can be addressed with energy efficient IBM storage software solutions.
The "Ask the Experts Online Jam" is a valuable opportunity for you to connect with 75+ real-world IBM experts on 30+ Tivoli products. These experts, many from IBM development, are recruited to answer your questions for a concentrated period of 12 hours (8 a.m. to 8 p.m. Eastern, USA).
Step 1: You have a question, usually fairly technical. Step 2: You find the expert best suited to answer it by browsing experts by pre-defined category and product. Step 3: You fill in a field on the "Ask the Experts Online Jam" web application to submit the question. Step 4: You receive an email answer to your question(s), and the Ask the Experts Jam web application is updated for other members to see.
Ask questions of 75+ IBM experts on the following 30+ topics:
Datacenter Management Tools: IBM Tivoli Monitoring, IBM Tivoli Composite Application Manager for Transactions and WebSphere/J2EE, Tivoli Application Dependency Discovery Manager, Tivoli Provisioning Manager, Tivoli Service Request Manager. Network, Service Assurance and Events: Tivoli Netcool Impact, Tivoli Netcool Performance Flow Analyzer, Tivoli Netcool Performance Manager, Tivoli Netcool/OMNIbus, Tivoli Network Manager (Precision and NetView/d). Asset Management: Asset Management for IT and Enterprise, Enterprise Asset Management Trends and IBM Maximo Industry Solutions. Security: Tivoli Access Manager, Tivoli Identity Manager, Tivoli Federated Identity Manager, Tivoli Enterprise Access Manager Single Sign On, Tivoli Compliance Insight Manager, Tivoli Directory Server, Tivoli Key Lifecycle Manager, Tivoli Security Information and Event Manager, Tivoli Security Policy Manager. Storage: Tivoli Storage FlashCopy Manager on AIX and Windows, Tivoli Storage Manager, Tivoli Storage Productivity Center, Tivoli Storage Manager (TSM) FastBack. z/OS: NetView for z/OS, OMEGAMON, Tivoli Security for System z: Tivoli zSecure Suite.
Click here for more information. I personally will be available from 8am to 2pm covering IBM Tivoli Storage FlashCopy Manager on Windows but there will also be many other storage experts available for the entire 12 hours. Please join us!
We have gathered a team of SMEs from various areas of the business to discuss a variety of topics, spanning different interest areas including customer success stories, upcoming events, Business Partner spotlights, technical tips and tricks, product strategy, roadmaps and hot topics -- and of course, topics of interest to you!
Introducing the team!
BJ Klingenberg: Senior Technical Staff Member - Storage Software, IBM Software Group BJ has over 25 years of storage software strategy and development experience. He has held various technical and management positions, nearly all of which have been related to storage software. His experience in enterprise storage management includes DFSMS, DFSMShsm, DFSMSdss, and also Tivoli Storage Manager, Tivoli Storage Productivity Center (TPC), as well as System Storage SAN Volume Controller (SVC). He has also been involved in projects which apply ITIL management best practices to enterprise storage management. BJ is currently focusing on storage archiving solutions. BJ is a graduate of the University of Illinois Urbana/Champaign, where he received a Bachelor of Science degree in Computer Science, and holds a Master of Science degree in Computer Science from the University of Arizona.
Dave Rice: Business Partner Marketing, Tivoli Storage Software Dave currently works in IBM's Worldwide Software Group, where he drives Business Partner Marketing for Tivoli storage software and also has a focus on the Asia Pacific and Japan geographies. In this role, Dave influences the Business Partner sales pipeline through lead/pipeline analysis, progression activities, partner communications, and implementing programs that provide Business Partner opportunity identification. Dave has been in a broad set of storage software marketing roles for the past 13 years, and has 35 years with IBM. Outside of IBM, Dave's interests include astronomy, as well as home and life improvement projects.
Del Hoobler: Senior Software Engineer Del is a Senior Software Engineer who has worked for IBM for over 20 years in software design, development and services. For the past 13 years, he has worked on designing and developing software products for the IBM Tivoli Storage Manager (TSM) suite of products. Most recently, Del was the technical development lead for the TSM Windows snapshot (VSS) support for Microsoft Exchange Server and Microsoft SQL Server. Del enjoys working with people and helping solve their complicated IT problems.
Devon Helms is currently an intern with the IBM Tivoli Software group and a second-year MBA candidate at the Paul Merage School of Business at UC Irvine. His studies are focused on business strategy and corporate finance. Before returning to the academic world to pursue his MBA, Devon was a business operations and technology consultant. He has been involved in hundreds of engagements, analyzing and improving his customers' business processes. After his studies are complete, Devon wants to continue to help clients improve the performance of their businesses through business process and financial analysis. In his free time, Devon is an avid marathon runner, rock climber, and SCUBA diver. Devon lives in Lakewood, CA with his lovely wife, Shana, and his 8-year-old Siberian Husky and faithful running partner, Frosty.
Greg Tevis: Tivoli Storage Technical Strategist Greg has over 27 years in IBM storage hardware and software development. He worked in ADSM/TSM architecture and technical support in the 1990s and was one of the original architects of IBM's storage resource management solution, Tivoli Storage Productivity Center (TPC). He currently has responsibility for technology strategy for all Tivoli Storage and was involved in all of the recent IBM Storage acquisitions including XIV, Diligent, FilesX, Novus Consulting, and Arsenal Digital.
Jason Davison: Product Manager Jason has been the product manager for the Tivoli Storage Productivity Center (TPC) family since joining IBM in 2006. Prior to joining IBM, Jason was a product manager at EMC and Prisa Networks, responsible for the road map and strategy of various storage management offerings. When not helping define the direction for TPC, Jason acts as the President for Classic Soccer Club, a youth soccer club where his son currently plays.
John Connor: Product Manager John is the Product Manager for IBM's flagship data protection and recovery offerings, the Tivoli Storage Manager family. During John's tenure as product manager, TSM has experienced strong growth, growing faster than the overall market and gaining market share. Prior to joining the Tivoli Storage Manager team in 2005, John helped drive the business strategy for IBM Retail Store Solutions. Prior to that, John had product and marketing roles in various IBM software businesses including WebSphere and networking software. John has an MBA from Duke University and an undergraduate degree in electrical engineering from Manhattan College. In his spare time, John enjoys competing in triathlons and has successfully completed an Ironman triathlon.
John R. Foley Jr.: Product Marketing Manager John is currently a marketing manager within IBM's Tivoli storage software marketing team. John has over 20 years of experience in the areas of storage hardware, storage software and system networking. He has held positions in management, product line management, strategy, business development and marketing. In the past 10 years, he has served on multiple storage projects including SAN storage (fibre channel & iSCSI), Network Attached Storage (NAS) and fibre channel switch offerings. Most recent projects include the introduction of IBM's System Storage N series portfolio stemming from the NetApp OEM agreement and the release to market of IBM's newly introduced Tivoli Storage Productivity Center Version 4 and IBM Information Archive Version 1.
Kelly Beavers: IBM Storage Software Business Line Executive Kelly joined the IBM Storage Software team in 2004 as Director of Strategy and Product Management for Storage Software and Solutions. Her team is responsible for guiding the development and release of products that capitalize on market/technology trends, and for defining and executing tactical go-to-market plans for IBM storage software solutions across both the Tivoli and Systems Storage brands. Kelly has 28 years with IBM where she's held a variety of roles including Finance, Pricing, Tivoli Channel Development, Director of Customer Insight, managing Market Intelligence, Customer Relations and Marketing Operations. Kelly is married with two daughters, ages 19 and 12.
Matt Anglin: Tivoli Storage Manager Development Matt has been a member of the Tivoli Storage Manager Server Development Team for 15 years. His areas of expertise include data movement to and within the server, deduplication, shredding, and DB2 interactions. He is the AIX platform expert in TSM, and is knowledgeable about other Unix, Linux, and Windows platforms. Matt lives in Tucson, Arizona.
Matthew Geiser: Manager, Storage Software Product Management Matt joined IBM in 2001 and has worked in product management and product development for Storage Software offerings including SAN Volume Controller, Tivoli Productivity Center, Tivoli Storage Manager and IBM Information Archive. Matt's current responsibilities include managing the product management team for the storage infrastructure management offerings. Prior to IBM, Matt worked in a variety of operations, project management and software development roles in the banking and energy industries.
Milan Patel: Senior Product Marketing Manager Milan is responsible for Product Marketing of IBM storage software for virtualized server environments, storage clouds and of course every day issues in storage management like backup, recovery, archiving and replication. Milan has been with IBM for over 6 years working in server and storage systems and storage software marketing groups. Prior to that, Milan spent 13 years in various capacities from development to product management of various server subsystems and systems management.
Richard Vining: Product Marketing Manager Rich is the Product Marketing Manager responsible for the IBM Tivoli Storage Manager portfolio of products. Rich joined IBM in April 2008 as part of the acquisition of FilesX, where he served as Director of Marketing. Rich has more than 20 years of experience in the data storage industry, holding senior management roles in marketing, alliances, customer support and product management at a number of leading edge companies, including Signiant, OTG Software, Plasmon and Cygnet. Rich enjoys eating, drinking, travelling and golfing (but doesn't everybody?)
Rodney Fannin: Worldwide Channel Manager, Tivoli Storage Software Rodney has over 15 years of experience in working with Business Partners. Primary responsibilities include refining the channel strategy for Storage software and developing sales and marketing tactics to increase reseller revenue worldwide. Rodney is also a contributing author for the BP Spotlight on our blog.
Roger Wofford: Product Manager Roger is currently a Product Manager in Tivoli Storage Software. He has experience in Manufacturing, Development, Marketing and Sales within IBM. He enjoys golf, swimming and the Rocky Mountains. Roger plans to blog about how customers use archiving solutions in their storage environments.
Ron Riffe: IBM Storage Software Business Strategist Ron is currently the business strategist for IBM Storage Software. During the last six years, Ron has been devising and implementing IBM's storage software strategy with a focus on creating greater client value through integrating IBM storage software and storage hardware offerings. Ron has managed storage systems and storage management software for more than 23 years, holding positions in senior management, product line management, strategy and business development for both IBM System Storage and IBM Tivoli Storage. Ron has written papers on the synergies of storage automation and virtualization and frequently speaks at conferences and customer locations on the subject of storage software. Prior to joining IBM, Ron spent 10 years as a corporate storage manager for international manufacturing firm Texas Instruments after receiving a B.S. in Computer Science from Texas A&M University.
Shawn Jaques: Manager, IBM Tivoli Storage Product Management Shawn has been in his current role as manager of storage software product management for nearly three years. The team is responsible for product strategy, content, positioning and pricing of IBM storage software solutions. Prior, Shawn had product and market management roles in other Tivoli product areas as well as a stint in Tivoli Strategy. Before joining IBM, Shawn was a Consulting Manager at Cap Gemini consulting and an Audit Manager at KPMG. Shawn has a Master of Business Administration from The University of Texas at Austin and a Bachelor of Science from the University of Montana. He lives in Boulder, Colorado and enjoys fly-fishing, skiing and hiking with his wife and kids.
Terese Knicky: Analyst Relations, Tivoli Terese is with Tivoli's analyst relations team covering Storage, System z, Job Scheduling and IBM's general enterprise solutions. Terese was born and raised in Omaha, NE and transplanted to Texas, where she enjoys watching her two boys play college football.
And finally, let's talk about me. I'm Tiffeni Woodhams and I have been with IBM for nearly seven years. Currently, I am a Tivoli Storage Marketing Manager where I am responsible for general marketing activities, ranging from pipeline measurement and tracking, providing marketing execution guidance and communications to the geography teams; Tivoli Storage Social Media lead and co-lead for IBM Storage Social computing strategy. I also work on major launches like Dynamic Infrastructure and Information Infrastructure providing the storage messaging and linkages. Prior to this role, I have held several other marketing positions including Tivoli Provisioning Go-to-Market Manager, Benelux Software Marketing Manager focusing on Tivoli, WebSphere, and Lotus, Americas Tivoli Marketing Manager, and Tivoli Launch Strategist. In my spare time, I enjoy playing sports (basketball, softball, and golf), coaching JV girls basketball, riding horses, and spending time with family and friends.
Now that you know a little background on each of the team members, we hope that you will let us know some of your interest areas when it comes to IBM Storage and IBM Tivoli Storage Software solutions. Please post comments to this blog and let us know what you want to hear about.
Some topics we will be discussing in the next month include: Pulse 2010, the Premier Service Management Event; Data Reduction: the steps to get to where you want to be; Archiving: why you need to do it; upcoming webinars; Unified Recovery Management; and new product announcements and roadmaps.
Thanks and we look forward to hearing your feedback.
Me: What are the hot topics in the area of storage and information infrastructure today?
John: The hot topics in the area of storage and information infrastructure today are how, in today's tight economy, customers are leveraging storage in their information infrastructure to improve scalability, addressing the performance of their storage management assets, cutting capital expenditures by reducing duplicate data to lower storage capacity needs and simplifying the overall management of their storage infrastructure.
Me: Which topics would you like to see presented at Pulse?
John: Ideally I would like to see sessions at Pulse that highlight customer success stories, how Tivoli storage management and/or IBM storage solutions helped customers address the challenges we discussed above.
Me: Who are good candidates for submitting abstracts and why?
John: The best candidates to talk about these successes are the folks who implemented them, which would be our customers. Customers are able to discuss their return on investment and how the IBM storage solutions are benefiting them in their everyday business operations. Another good candidate would be our business partners, accompanying and co-presenting with their clients on the IBM storage solutions they've implemented.
Me: What are you looking for in a good proposal?
John: As I mentioned earlier about the topics I would like to see presented, a good proposal is a customer success story around IBM storage solutions, including Tivoli storage management software, and/or storage hardware and storage services. This proposal should describe the initial pain points or problems that existed, how our solutions helped and the lessons learned that could be applied to other customer situations. This type of proposal and session at Pulse will help others learn from each other.
Me: What are the benefits of submitting an abstract for Pulse?
John: Submitting your abstract is a great way to gain visibility for your work and your particular solution. Customers that submit abstracts and are selected will receive a complimentary pass to attend Pulse ($1,995 value) and admission to the on-site VIP client lounge. Attending Pulse is not only a great way to share your company's success implementing IBM storage solutions, but also a great education and networking opportunity.
Me: What is the deadline for submitting call for speaker abstracts?
John: The deadline to submit your abstract is Nov. 20th. Don't delay; submit your proposal today.
With such great guidance from John, you're sure to write a perfect proposal. If you have any questions on submitting abstracts for Pulse or want feedback on an idea, just leave a blog comment. Also, be sure to check out this justification letter if you need that extra edge to convince your boss of the value of attending Pulse. I hope to see you there!