Between the 19th and the 23rd of May, Las Vegas was once again host to the IBM Edge conference. Throughout the five days of the event, more than 5,500 technology leaders and practitioners from 55 countries participated in hundreds of sessions and discussions around the latest developments in Storage Management, IT Operations Analytics and Cloud Optimization.
Right from the beginning – the General Session on the first day of the conference – leading IBM experts like Tom Rosamilia (Senior Vice President, IBM STG) emphasized what was to be the underpinning theme of the entire event: ‘Infrastructure matters because business outcomes matter’. Why? Also on the first day, IBM revealed some of the results of its latest wide-ranging study of organizations (due to be released in July 2014): even though seven out of 10 organizations recognize the importance of IT infrastructure for competitive advantage and revenues, fewer than 10% say their existing IT infrastructures are fully prepared to address the proliferation of mobile devices, social media, data analytics and cloud computing.
May 19th also brought EdgeTalks at IBM Edge: a special session hosted by Surjit Chana (Vice President & Chief Marketing Officer, IBM STG), whereby TED Talks speakers Ron Finley, John Wilbanks and Peter Singer shared their inspiring insights and experiences with innovations in fields ranging from organic food forests to the latest cybersecurity concerns that affect the world. You can see the replay here.
On another front, the MSP Forum hosted impactful discussions around strategies for flexible hybrid cloud implementations and the priorities, challenges and opportunities that MSP decision makers have when it comes to software defined automation.
By the third day of IBM Edge 2014, more than 55 million impressions worldwide had been generated by the social media activities around the conference. The Social Lounge set up at the event encouraged all participants to be active on social media, and the associated crowd chat led to many interesting discussions between analysts, experts and practitioners. Meanwhile, the three sub-events were in full swing: Technical Edge, with more than 150 expert technical sessions and hands-on labs covering 14 technical tracks (including Software Defined Environments (SDE) and Storage); Winning Edge, with sessions dedicated to providing cutting-edge education, insights and opportunities for Business Partners and Sellers; and Executive Edge, with talks around overarching aspects of cloud, big data and storage infrastructure. They were followed, on May 22nd and 23rd, by further engaging sessions around storage, IT Operations Analytics, Workload Automation, ITSM and more.
To wrap up, here are some more replays from the conference that you can watch at your leisure: http://www.livestream.com/ibmedge. Here’s looking forward to next year’s edition and hoping that it will prove as much fun for participants as the one that’s just ended.
With more than 90% of decision makers moving toward cloud implementations to drive improved business outcomes, it is important to understand the key challenges organizations face when considering Tivoli Storage Manager (TSM) in the cloud, including:
How to leverage cloud storage as a storage tier for TSM
How to move data into a more cost-effective architecture while improving recovery
A recent survey was completed to understand how users plan to use TSM over the next three years. For primary storage pool usage, the leading trends are 1) TSM deduplication and 2) tiering, with cloud ranked fifth. Cloud at #5! The projected growth of XaaS (anything as a service) will definitely influence survey results in the next few years.
Another survey provided a view of the trend for disaster recovery (DR) pool usage over the next three years: 1) TSM replication, 39% (tape usage decreased); 2) node replication, up 30%; 3) copy storage pool backups to cloud, up to 32%.
Thus, to understand the true value that Data Protection as a Service can provide -- i.e., XaaS [BaaS, DRaaS, AaaS, IaaS] -- the considerations include:
Partner ecosystem -- build upon open interfaces
Transmission of data to cloud -- be selective about data that goes to the cloud
Security -- encryption in flight / at rest
Economics -- in-house TSM will likely cost less
The aspects of a cloud data protection solution can be summarized as:
Service Model -- services offered; managed by the customer, IBM or a third party
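To put the economics consideration above in concrete terms, here is a minimal back-of-the-envelope sketch. All rates are hypothetical placeholders for illustration only, not IBM or cloud-provider pricing; real figures vary widely with capacity, retention and staffing.

```python
def annual_cost(capacity_tb: float, per_tb_month: float, fixed_annual: float = 0.0) -> float:
    """Rough annual storage cost: a monthly per-TB rate plus any fixed overhead."""
    return capacity_tb * per_tb_month * 12 + fixed_annual

# Hypothetical rates for a 100 TB backup estate:
in_house = annual_cost(100, per_tb_month=10.0, fixed_annual=15_000)  # amortized hardware + admin
cloud    = annual_cost(100, per_tb_month=25.0)                       # cloud object storage tier

print(f"in-house: ${in_house:,.0f}/yr, cloud: ${cloud:,.0f}/yr")
```

Under these assumed rates the in-house option comes out cheaper, matching the "in-house TSM will likely cost less" observation; flip the rates and the conclusion flips with them.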
Do you need a more simplified approach for deploying Tivoli Storage Manager (TSM) for Virtual Environments (VE)? Are you experiencing issues because the data backup ratios promised by IBM marketing for TSM cannot be matched? Do you want to know more about how backup experts with multiple deployments are handling these issues? Then reading this blog can help address your questions while removing some frustration.
First, it is important to note that the solution design case study is based on TSM v6.4, yet it is also valid for TSM v7.1 (announced Oct 2013) with respect to the following criteria:
Scalability and performance
Data reduction capabilities (deduplication)
Administration and monitoring
Hardware snapshot (flashcopy)
Next, the case study covers a client environment with a small number of VMs (i.e., 500); yet the findings and results apply to environments up to 2,000 VMs. The detail outlines a strategy leveraging three elements: transition, vSphere architecture, and backup storage device selection (see further description of the elements at bottom).
First Things First -- Case Study Environment Specs:
500 VM deployment (small scale); yet solution design is same for larger environment up to 2000 VMs
Demonstrates how to achieve objective
Works within infrastructure constraints -- network bandwidth limitation
Multiple retention requirements
Achieve reliable, successful backups
Dedup options include TSM native or appliance.
The dedup ratios should be benchmarked to ensure realistic estimates are used.
When considering cost and restore performance, it is important to evaluate the trade-off between performance and storage costs; consider collocation by file space (VM) with a virtual tape library (VTL), and configure critical VMs as exceptions for management.
Now About Those Ratios
If you've attained a dedup ratio closer to 3:1, that can actually equate to a 25:1 reduction ratio, depending on whether the reduction is measured against changed data or the data of the entire environment. The case study also examined network constraints, the VTL selected, and sizing.
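The arithmetic behind that jump from 3:1 to 25:1 is simple enough to sketch. A minimal illustration, assuming (this figure is illustrative, not from the case study) that 12% of the environment changes per backup cycle and that incremental backups deduplicate the changed data at 3:1:

```python
def effective_reduction(change_rate: float, dedup_ratio: float) -> float:
    """Overall reduction versus backing up the full environment each cycle,
    when only changed data is backed up and it deduplicates at dedup_ratio:1."""
    stored_fraction = change_rate / dedup_ratio  # fraction of total data actually stored
    return 1.0 / stored_fraction

# A 3:1 dedup ratio on 12% changed data stores only 4% of the
# environment per cycle -- a 25:1 reduction overall.
print(f"{effective_reduction(change_rate=0.12, dedup_ratio=3.0):.0f}:1")  # 25:1
```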
Therefore, the main takeaway -- with the right strategy, use of the TSM blueprints, and the following simplified estimate considerations:
Occasional full backup
Occasional image restore
Initial phase-in contingency
DR (disaster recovery) contingency
TSM for VE delivers a cost effective, simplified means to achieve reliable, successful backups.
Note: Optimal design will differ based on environment. Published case study made available upon request.
Global Program Director
IBM Software Group
Day 2 at IBM Edge 2014 focused on how clients, Business Partners and IBM are working together to build smarter infrastructures to meet the business challenges discussed on Day 1 (cloud, analytics, mobile and social).
Chris O’Connor, @chrisoc_IBM, Vice President of Strategy & Engineering, IBM Cloud & Smarter Infrastructure, spoke about the need to seamlessly extend infrastructures from what organizations own today to what they’ll need in the future. He recommended two approaches:
Cloud-enable existing workloads
Think about ‘cloud first’ for new workloads
The idea is to accelerate time to market and enable real-time, actionable insights. With 70% of enterprises planning to pursue hybrid clouds by 2015, according to a 2013 report by Gartner, a two-pronged approach makes sense.
Andrea Nelson, Director of Storage Marketing at Intel, corroborated Gartner’s estimate, saying an estimated 50% of organizations less than 10 years old are putting their IT infrastructures on the cloud today.
Chris spoke about the importance of standards, such as OpenStack, that help organizations quickly assemble Software Defined Systems from components, rather than building clouds a stick at a time. With new development platforms such as IBM’s Code name: BlueMix, organizations can construct enterprise-capable cloud applications faster, without having to deploy a cloud infrastructure.
Mike North, Sr. Director of Programming for the National Football League, spoke about the importance of speeding up the infrastructure to enable analytics. ‘Time to truth’ is critical for analytics. With faster processing, the NFL is able to look at hundreds of potential schedules and choose the one with the best potential outcomes for its constituents. IBM’s Arvind Krishna suggested that traditional analytics is like driving a car by looking in the rear-view mirror – you can only see where you’ve been. Predictive analytics helps you see into the future, react faster, and achieve better business results.
Maria Winans, @mariawinans, IBM VP of Social Business, spoke about how IBM and other organizations are driving people-centric engagement for new profit channels. She also spoke about the importance of analytics, saying you can’t personalize customer experiences if you can’t do the required analytics. Maria offered 3 suggestions for successful social business initiatives:
Build shared value
Protect your brand
New mobile applications offer the opportunity to improve customer satisfaction and customer loyalty, as well as generate new revenue. Rapid transformation is happening across industries and geographies. IBM estimates there will be over 1 trillion connected objects and devices by 2015. Mobile applications are enriched by cloud, analytics and social business initiatives.
Storage virtualization and Software Defined Storage
Storage virtualization is the foundation for Software Defined Storage. Virtualization provides an abstraction layer between physical storage and the applications that use it. The result is a storage infrastructure that can grow and change without impacting users or applications. Software Defined Storage will be required to manage the vast amounts of data organizations expect to manage in the years ahead.
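As a toy illustration of that abstraction layer (the class and placement policy here are invented for this sketch, not an IBM API), applications address logical volumes while the pool decides which physical backend holds them:

```python
class VirtualPool:
    """Toy model: one logical pool fronting heterogeneous physical backends."""

    def __init__(self):
        self.backends = {}   # backend name -> free capacity (GB)
        self.placement = {}  # logical volume name -> backend name

    def add_backend(self, name, capacity_gb):
        self.backends[name] = capacity_gb

    def create_volume(self, volume, size_gb):
        # Simple policy: place on the backend with the most free space.
        target = max(self.backends, key=self.backends.get)
        if self.backends[target] < size_gb:
            raise RuntimeError("pool exhausted")
        self.backends[target] -= size_gb
        self.placement[volume] = target
        return target

pool = VirtualPool()
pool.add_backend("vendor_a_array", 500)  # capacities in GB, purely illustrative
pool.add_backend("vendor_b_array", 800)
chosen = pool.create_volume("app_data", 100)
print(chosen)  # the caller only ever sees the logical name "app_data"
```

Because callers never name a physical array, backends can be added, drained or replaced without touching applications -- which is the grow-and-change-without-disruption property described above.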
Steve ‘Woj’ Wojtowecz, @steve_woj, IBM Vice President, Storage Software Development, shared new research from ITG that analyzed storage TCO using IBM, EMC, and VMware storage management solutions. ITG highlighted 4 issues that significantly impact storage TCO:
Storage software costs
Storage administration costs
IBM Virtual Storage Center users were far more successful than their peers using EMC or VMware storage management solutions:
In large enterprises, storage TCO was 72% lower with IBM than EMC
In midsized environments, storage TCO was 35% lower with IBM than VMware storage management.
Jose Garcia, Manager of Enterprise Storage and VMware at UCLA Health System, discussed his storage transformation project. Storage virtualization enabled rapid deployment of an Electronic Health Records system that improves patient care and improves organizational efficiency. Storage virtualization also reduced storage costs and enabled rapid data growth. Improved efficiency saved enough to fund a 3rd data center that will improve resilience and flexibility.
Collaborators wanted. No Eeyores. No squirrels.
Snehal Antani from GE Capital spoke about the importance of delivering IT at market speed, and with commercial intensity. He offered a strategy of dealing with important groups of people in the organization:
Kings and Queens
Collaborators can accelerate change. Identify your collaborators and put them on a pedestal.
Cynics are like Eeyore in Winnie-the-Pooh. They’ll tell you why change is hard, and focus on what might go wrong. Ignore your cynics.
Kings and Queens are executives and managers who are eager to be offended. They resist change that may impact their empires. They’re a small, but vocal, group. Don’t give them a megaphone.
Snehal also pointed out that technologists can get distracted by new technology, even if it isn’t essential to simplify or accelerate IT delivery. It’s like yelling, ‘Squirrel!’ to distract dogs, as in the movie, Up. GE Capital has signs that say, ‘No Eeyores’ and ‘No squirrels’.
Bottom line: Infrastructure matters
Can the right infrastructure help you build competitive advantage? Yes, of course. Infrastructure matters.
About the author
Mike Barton is a worldwide storage marketing manager at IBM. Mike is a former IT specialist with Gartner TCO and ITIL certifications. The opinions expressed herein are his own.
ITG Management Report: Cost/Benefit Analysis of IBM Virtual Storage Center Compared to EMC Storage Virtualization Solutions
Software Defined Storage (SDS) is getting a lot of attention lately from press, analysts and technology providers such as IBM, causing organizations large and small to take notice. SDS describes a set of storage access and data management services that can deliver what IT administrators are most interested in these days:
Lower storage costs
Less reliance on specific storage systems
Simplified data and storage management
Improved utilization of existing resources
International Data Corporation (IDC) published a taxonomy for Software Defined Storage which defines software-based storage as a storage software stack running on commodity, off-the-shelf computing hardware. SDS should offer a full suite of storage services and federation of the underlying storage to enable data mobility, according to IDC.
The interesting thing is, while the name Software Defined Storage is relatively new, IBM has been delivering technology and client solutions that match the SDS definition for over a decade.
Matching IDC’s definition, IBM SAN Volume Controller, introduced in 2003, is an x86-based appliance running Linux code, providing federated storage virtualization across heterogeneous storage platforms and enabling advanced storage services. SAN Volume Controller has been proven to scale to multiple petabytes. This core technology is also included in IBM’s midrange Storwize storage systems. To date, over 55,000 SAN Volume Controller and Storwize systems have been shipped worldwide, making it one of the most popular business-class storage virtualization solutions.
If you can’t attend Edge, look for video interviews with Brian Jeffery, Managing Director of International Technology Group, and Steve Wojtowecz, VP of Storage and Network Management Software Development, on [TheCUBE, by Wikibon], live on Monday, May 19 and afterwards on demand.
By now, everyone in the IT Industry is convinced about the benefits of virtualization. Server virtualization – Yes! But storage virtualization? That’s not an easy one!
In server virtualization, we simply divide one physical server into several virtual environments, which saves you lots of money and helps wring the best out of your infrastructure. But did you know that an inadequate storage system can actually erode the economic benefits of server virtualization, because virtual machines can consume large amounts of storage?
So, why is it that Storage Virtualization still isn’t as popular as Server Virtualization?
In storage virtualization, we group physical storage from multiple network storage devices so that they look like a single storage device that can be managed from a single console. This can raise several complexities when –
Data center has storage products from different vendors
Some vendors provide storage virtualization for their own hardware only
Virtualized storage features you need are not available in vendors’ non-virtualized offerings
Throw in the further dilemma of choosing where to put your virtualization – hosts, network or arrays – or perhaps being bound by a vendor lock-in clause. You have probably been buying additional devices and systems on an ad hoc basis to meet new storage demands, but think of the advantages you could gain with a storage architecture (read: storage virtualization) that allows you to upgrade when needed, in a cost-effective manner! Now that’s enough to convince you of the virtues of storage virtualization.
Let’s find out why IBM’s VSC is the best bet in the market.
A recent ITG report compared the IBM VSC solution to those of other large vendors (EMC, VMware) over a period of five years to determine savings.
And the verdict is – IBM Virtual Storage Center (VSC):
The only solution to address large-scale multivendor virtualization
Averaged 72 percent less than EMC for overall costs of ownership
Averaged 35 percent less than VMware over 5-year total cost of ownership
Supports more than 200 platforms -- all of the major vendors, as well as many smaller ones
You can dig into the calculations and analysis in the two ITG reports to understand the how and why of it. As one user shared when asked to describe VSC in simple terms, “It works.”
In the two years since IBM acquired Butterfly, it has generated hundreds of Analysis Engine Reports (AERs) analyzing billions of gigabytes, and established facts about Tivoli Storage Manager (TSM) that should make the competition sit up and take notice.
The Backup Analysis Engine report from IBM Butterfly Software uses light-touch, agentless software technology to analyze an existing heterogeneous data backup environment. It is a non-intrusive analysis based on empirical production data collected in minutes.
Why is Butterfly important? The Gartner Magic Quadrant: Backup and Recovery 2013 competitive analysis says that between 2012 and 2016, one-third of organizations will change backup vendors due to frustration over cost, complexity and/or capability. To be able to say conclusively that a TSM solution can reduce backup infrastructure costs by as much as 38% compared to some competitive products opens the door for IBM to pursue that 33% of organizations looking for a change.
AER is the Key
More demand for AERs is expected with the launch of the automated “self-service” AER generation model. Scheduled to go live at the beginning of 2H 2014, it will scale out as a service to IBM and its Business Partners. All of this drives home the fact that Butterfly AERs have metamorphosed into a well-accepted, standard approach to storage infrastructure analytics.
Meet the Butterfly Storage and Backup Assessment Team at Pulse 2014
If the butterfly flutter has caught your interest, visit Pulse 2014 on Feb 23-26 in Las Vegas and meet the folks who deliver Butterfly Storage and Backup Assessments in the IT Optimization section of the IBM booth. Find out how your company can use business analytics to dramatically lower the cost of running your backup and recovery or primary storage infrastructure.
I am often asked... "When can I use FlashCopy Manager with my EMC disk array?" (substitute "EMC" with your favorite vendor)
With FlashCopy Manager for Windows, you can leverage hardware snapshots for any disk array that has a VSS hardware provider. This is because Windows has a built-in architecture (referred to as "VSS") that enables pluggable snapshot support. A few years ago, we wrote a developerWorks article that explains how this works and how it integrates with TSM. (Note: The article refers to "TSM for Copy Services" instead of "FlashCopy Manager" because it was written before the product name was changed.)
But with FlashCopy Manager for UNIX and Linux and FlashCopy Manager for VMware, you must wait until support is added for your desired disk array. Last year, IBM partnered with Rocket Software to develop a device adapter pack that plugs into FlashCopy Manager for UNIX and Linux and FlashCopy Manager for VMware to extend support to more storage devices. You install it on top of an existing FlashCopy Manager (version 4.1 or later) installation on the application server being protected by FlashCopy Manager for UNIX (or on the proxy backup server in the case of FlashCopy Manager for VMware) and configure it to talk to the storage device. After that, you can leverage the power of FlashCopy Manager snapshot protection for the hardware device supported by that device adapter pack!
At the end of last year, Rocket Software released support for EMC Symmetrix (VMAX and DMX). They are planning to add more disk arrays in 2014. If you have devices that you want to see added, contact Rocket Software.
With a new school year underway, vacation season come and gone for many, and the Labor Day long weekend upon us in North America, entering September marks the unofficial end of summer. For many, this is a somewhat depressing time of year, as we realize that colder temperatures and the onset of winter aren’t far off.
However, it’s not all bad news. Some of us prefer outdoor activities in the fall and winter months, and when it comes to business, the fall presents a renewed interest in sharpening our skills and seeking networking opportunities by attending industry conferences and events.
For storage professionals in North America, an ideal opportunity comes in the form of Storage Decisions New York on September 16 & 17. Storage Decisions New York plans to bring together over 500 end users, independent experts, analysts, and top solution providers to engage in thought-provoking presentations, interactive networking opportunities, and sponsor showcases featuring the latest trends and technologies impacting the storage industry. The two-day conference is the only place you will find the industry's foremost independent experts – and the most qualified group of storage professionals – under one roof.
As a platinum sponsor of Storage Decisions New York, IBM will have a multi-faceted presence at the conference with ample opportunities to engage with the storage community. One of the highlights is our Tech-in-Action talk, where IBM’s Storage Software Business Strategist Ron Riffe will outline IBM's point of view on The Critical Decisions for Improving The Economics of Storage. Ron will touch on a range of considerations including the need for improved administration, the role of software-defined and the impact of flash – just to name a few.
Over the course of the two-day event, IBM storage experts will be available in booths 24 and 25 to meet attendees and discuss practical solutions to today’s storage challenges. The IBM booth will also be where attendees can pick up their complimentary conference USB key, which will be loaded with conference-wide materials and presentations.
Storage Decisions New York is worth taking a look at as a great way to kick off the fall conference cycle. If you're planning to attend, stop by and visit us. If you happen to be on the west coast and concerned that New York is too far to travel, don't worry: Storage Decisions is stopping in San Francisco on October 30.
Last Monday, EMC announced ViPR as its new Software-defined Storage platform. Almost simultaneously, Chuck Hollis described it as ‘Breathtaking’ in his usually excellent blog. I must admit, one thing I routinely find breathtaking about EMC is their approach to marketing. They have a knack for being able to take unexceptional technology (or, as in this case, combinations of technology and theories about the future), and spin an extraordinarily compelling story. With all seriousness and without tongue in cheek… Nicely done EMC! Read more.
IBM Edge2013 is fast approaching, and while the conference includes four events within an event to appeal to a wide range of attendees, the cornerstone of Edge from my perspective is the rich technical content to be delivered within Technical Edge.
Technical Edge is a 4.5-day technical event for IT professionals and practitioners focused on sharpening expertise, discovering new innovations and learning industry best practices. You can check out the published Technical Edge agenda of over 350 sessions spanning 16 tracks, sessions that are sure to hit on the top IT trends, opportunities and challenges we collectively face.
Specifically related to Cloud & Smarter Infrastructure, we’ve embedded over 50 technical sessions, demos and hands-on labs focused on Tivoli solutions, with the majority going deep on Tivoli Storage capabilities. Further, there are an additional 30 related sessions of interest to Tivoli users (i.e., IBM Storwize V7000, IBM Flex System Manager, IBM GPFS, etc.). These 80 sessions are scattered across the 16 tracks within the Technical Edge conference. (Hint: You can find the majority of these sessions within the Business Continuity and Systems Management tracks.)
Some of the session highlights I’m looking forward to seeing are:
“IBM's New Tivoli Storage Manager Operations Center” – Our new TSM GUI!
“IBM Tivoli Storage Manager and the Cloud” – This session will describe TSM’s multifaceted cloud strategy
“Protection of Virtual Machines using Tivoli Storage Manager for Virtual Environments and Tivoli Storage FlashCopy Manager”
Tivoli Storage Manager for Virtual Environments - Data Protection for VMware: Solution Design
Introduction to IBM's Virtual Storage Center (VSC) - Learn how you can gain storage efficiencies and grow your business using VSC’s capabilities.
How IBM SVC, Storwize V7000 and TotalStorage Productivity Center are used in real life to migrate data centers
Additionally, for those that want to roll up their sleeves and get their hands on some of these solutions, I would recommend the following hands-on labs:
IBM’s New Tivoli Storage Manager Operations Center hands-on lab
IBM Tivoli Storage FlashCopy Manager: New Features and Operation in Version 3.2
A double session - IBM Tivoli Storage Manager for Virtual Environments: Protecting and Recovering Virtual Machine Data
We will also have an interesting “Birds of a Feather (BOF)” session on Business Continuity led by Sanjay A. Patel – focused on using the Tivoli Storage Manager suite to help you proactively protect your data.
I encourage you to join us at Technical Edge to enhance your knowledge of our Tivoli solutions, and I look forward to “getting technical” in Vegas. Learn more and register today.
If you are like most of the clients I deal with, you are starting to recognize the storage part of your infrastructure represents a BIG opportunity for improvement in 2013 – in agility, in efficiency, and in cost. When demand (data growth) outpaces supply (ability of hardware vendors to increase areal density driving down costs) as dramatically as it has begun to do, something has to change in the way storage infrastructure is approached in order to help balance the equation again. That ‘change’ creates a perfect economic environment for vendor innovation resulting in creative new solutions for clients. If you have been paying attention to the storage space, you’ve noticed an increased investment pace as vendors explore technical innovations and try to explain these innovations to potential clients. One of my biggest frustrations though is when the industry can’t settle on terminology for describing a solution approach leaving clients thoroughly confused and paralyzed. Read more...
As I’ve been working with many members of the Tivoli Storage team to coordinate our involvement at IBM Edge 2013, and as the conference nears, it struck me the other day -- Edge really does have something for everyone. While the historical focus of this event has been storage—and storage content remains particularly strong—this year, IBM is expanding that focus to address related areas of IT optimization as well: cloud, smart analytics and big data, business continuity, and many more.
And it’s not only the IT topics being covered that are expanding -- so are the types of audiences that will have an interest in Edge. With four events under one roof, each aimed at the needs of a different audience, Edge 2013 promises not to disappoint – regardless of the role you have in IT. Just in case you aren’t sure if Edge is for you, below is a summary of the “four events within the event” and the highlights of Tivoli Storage in each:
Executive Edge: Executive Edge is a 2.5-day event for IT executives and leaders focusing on discovering new innovations for managing storage growth, accelerating cloud deployments, unlocking the insights from big data, and securing critical information and processes. Deepak Advani, General Manager of IBM Tivoli Software, who takes the stage multiple times in Executive Edge, will host a two-hour session entitled "Key Insights for Modernizing Your Data Protection Infrastructure" designed to help you shield critical data from threats both known and unknown. Deepak will not only provide IBM’s perspective on this critically important topic but will invite clients, analysts and IBM Business Partners to the stage to join the discussion. In addition to a wealth of thought leadership, this session will unveil the latest enhancement to the Tivoli Storage portfolio: IBM Tivoli Storage Manager Operations Center, which was previewed to rave reviews earlier this year at Pulse. More details on Executive Edge can be found by previewing the agenda.
Technical Edge: Technical Edge is a 4.5-day technical event for IT professionals and practitioners focused on sharpening expertise, discovering new innovations and learning industry best practices. Featuring over 350 sessions to choose from, you’ll hear from product and development experts with deep technical expertise who will not only introduce new IBM offerings and updates, but will put them into action through hands-on labs and demonstrations that closely match real-world operating conditions. In particular, for those interested in storage software there will be 35 Tivoli Storage-led sessions spanning 9 of the 16 technical tracks, with especially deep content in the business continuity and systems management tracks. For more specifics, check out the Technical Edge agenda.
MSP Summit: For Managed Service Providers (MSPs), the two-day MSP Summit focuses on topics that are unique to this community and is designed to help organizations accelerate service delivery to become next-generation MSPs. Strategic discussions will include topics such as cloud, next-generation systems & storage, and big data. One particularly interesting business opportunity for MSPs today is cloud-based Backup as a Service, with two successful Tivoli Storage MSPs (Cobalt-Iron and Front-Safe) slated to share their experiences.
Winning Edge: The tail end of Edge will include a three-day sales bootcamp catering to the needs of IBM Business Partner (BP) sales professionals. Everything BPs need to know to be successful will be discussed, such as the opportunities related to IBM’s Butterfly acquisition and storage virtualization, supported by the marketplace perspectives of an independent storage consultant. This information won’t just be theoretical; it's all founded on the real-world, quantified results already being achieved by IBM Business Partners and customers around the world, which will also be discussed.
While I’ve tried to highlight the four discrete events within the Edge event, this really only scratches the surface of Edge. There are all the other valuable aspects – the hours of networking opportunities, Executive 1x1s and the Solution Expo Hall, where you can connect with subject matter experts from over 50 sponsors, just to name a few.
Clearly I can’t do the Edge conference justice in a single blog, but hopefully this gives you a sense of what you can take advantage of and that there truly is something for everyone at Edge. To learn more, please check out the conference website; I hope to see you in Vegas in June at Edge2013.
I’ve been on the topic of software-defined storage for three posts now – one with my perspective, one covering a multi-vendor round table at Storage Networking World, and now on an intriguing bit of research.
Earlier this year, IBM sponsored EMA research into Demystifying Cloud. The project was intended to collect lessons learned from organizations of all sizes that had completed at least the first stage of their initial private cloud deployment, and then use that data to provide guidance to organizations considering the purchase of cloud technologies. Along the way, EMA discovered what most folks would not have predicted -- the critical role of storage for companies of any size and vertical when planning and implementing a private cloud.
Check out my latest post in TheLine for the rest of the story.
I’m just returning from the SNW Spring conference in Orlando. It seemed sparsely attended, but my 5-foot-tall wife of almost 28 years has always told me that dynamite comes in small packages (I believe her!).
As I noted in my last post, I was in Orlando to participate in a round table discussion on storage hypervisors hosted by ESG Senior Analyst Mark Peters. I was joined by Claus Mikkelsen – Chief Scientist at Hitachi Data Systems, Mark Davis – CEO of Virsto (now a VMware company), and George Teixeira – CEO of DataCore. Conspicuously missing from the conversation, both at this SNW and at a similar round table held during the SNW Fall 2012 conference, was any representation from EMC. More on that in a moment.
The session this time drew a crowd roughly three times the size of the Fall 2012 installment – a completely full room. And the level of audience participation in questioning the panel members further demonstrated just how much the industry conversation is accelerating. I was pleased to see that most of the discussion focused on use cases for what was interchangeably referred to as storage virtualization, storage hypervisors, and software-defined storage. Check out my new blog – TheLine – for a view on a few of the use cases that were probed.
Great timing on this post, Ron. I was just reading an editorial by Rich Castagna on searchstorage.com pondering whether "software-defined storage" was just the latest marketing hype, and whether vendors were somehow claiming that this new capability would make storage hardware obsolete.
You make the point, very eloquently, that this is not hype, that it has been deployed widely, and that it does not mean the end of storage hardware -- it just shifts the quantity and the costs of the storage hardware in the customer's favor, while also creating new use cases that solve real concerns in the market.
Back at the Storage Networking World Fall 2012 conference, I participated in a round table on storage hypervisors hosted by ESG Senior Analyst Mark Peters. I was joined by Claus Mikkelsen – Chief Scientist at Hitachi Data Systems, Mark Davis – CEO of Virsto (now a VMware company), and George Teixeira – CEO of DataCore. Following the conference, Mark Peters posted a very nice series of three video blogs with perspective from the round table participants. They are worth a listen.
The discussion is continuing at SNW Spring 2013 at Rosen Shingle Creek in Orlando, Florida. The panel discussion "Analyst Perspective: The Storage Hypervisor: Myth or Reality?" will happen on Tuesday, April 2 at 5:00 pm EDT.
As we prepare for the round table next week, I thought it worthwhile to offer a point of view on storage hypervisors. Check out my new blog – The Line – for more information.
Royse Wells, Chief Storage Architect for International Paper, discusses Tivoli Storage Manager Operations Center, previewed at Pulse 2013. TSM Operations Center is a new graphical interface that helps administrators and management get quick summary views of the backup environment and simplifies administration.
Jeff Jones, UNUM
UNUM Uses Tivoli Storage Manager for Virtual Environments
Jeff Jones is senior infrastructure manager at UNUM, a leading provider of financial protection benefits in the US and UK. UNUM has about 85% virtual servers today. UNUM uses Tivoli Storage Manager for Virtual Environments to deliver faster backups and restores, and to reduce the risk of data loss for 650 Windows and Linux VMs.
Klavs Kabell, IT-WIT
Modernizing Backup for Today’s Virtual Environments
Klavs Kabell is a Senior System Consultant at IT-WIT, an IBM Business Partner in Denmark specializing in backup solutions. Klavs discusses how backup solutions have evolved as VM deployments have grown. Tivoli Storage Manager for Virtual Environments helps simplify VM backup administration and tracking, while incremental-forever technology improves storage efficiency.
Thomas Bak, Front-safe
Cloud backup and archive using TSM and Frontsafe Portal
Front-safe received the Best Cloud Solution award at the IBM Pulse 2013 conference, and the 2013 IBM Beacon Award for the Best Solution to Optimize the World’s Infrastructure. Learn about the value of enabling backup as a cloud service using Front-safe Portal software.
Laura DuBois, Program VP of Storage for IDC, and Steve Wojtowecz, IBM VP of Storage and Networking Software, discuss client opportunities and requirements for storage clouds and compute clouds. Client cloud storage requirements include backup and archive clouds, file storage clouds, and storage that supports compute clouds.
Chris Dotson, IBM CIO Office
IBM’s storage transformation featuring SmartCloud Virtual Center
Chris Dotson works in IBM’s CIO Office as a Senior IT Architect for Services Transformation and is guiding IBM’s own storage transformation. As a large enterprise, IBM manages over 100 petabytes of data, growing at 25% per year. Chris discusses block storage virtualization, automated block storage tiering, file cloud storage, and automated block storage management at IBM. He shows how SmartCloud Virtual Storage Center is reducing storage costs by 50% with no noticeable performance impact on users.
BJ Klingenberg, IBM Global Technology Services
IBM Global Technology Services Uses Tivoli Storage Productivity Center (TPC) to Manage Clients’ Storage Environments
BJ Klingenberg is a Distinguished Engineer and Enterprise Storage Management lead for IBM. BJ shares his experiences using IBM Tivoli Storage Productivity Center in IBM’s Service Provider environment. Service Provider environments are governed by Service Level Agreements, so managing capacity, performance and availability are essential capabilities, and storage efficiency is essential to remaining competitive. See how TPC helps IBM deliver outstanding customer service at competitive prices.
Jason Buffington, ESG Senior Analyst, and Tom Hughes, IBM Worldwide Storage Executive, discuss business and technical challenges for data protection. Tom and Jason discuss new solutions and best practices for protecting data more efficiently and effectively in today’s cloud, mobile and virtual environments.
Colin Dawson, TSM Server Architect, introduces Tivoli Storage Manager Operations Center, the next generation of backup administration from IBM. He describes how TSM Operations Center was designed and built, using extensive user feedback.
Jonathan Bryce, OpenStack Foundation founder and Todd Moore, IBM
OpenStack Provides Compute, Storage and Network Interoperability for Clouds
The OpenStack Foundation has gained 170 corporate and over 8,200 individual members since its inception in 2012, making it one of the fastest-growing cloud standards. Jonathan Bryce, Executive Director and founder of the OpenStack Foundation, and Todd Moore, IBM Software Group Director of Interoperability and Partnerships, discuss the capabilities and opportunities for building cloud solutions using OpenStack to manage compute, storage and network resources.
Deepak Advani, IBM
Optimizing IT Infrastructures for Today’s Workloads
Deepak Advani, General Manager of Tivoli Software discusses top issues and opportunities facing clients as they adopt new breeds of applications to engage with customers and improve operations using mobile devices, cloud and analytics.
Wow! What an exciting week it has been at Pulse 2013, especially for Tivoli Storage! In addition to the inspiring words of Day 3’s keynote speaker, Peyton Manning, I was equally inspired by many of our Tivoli Storage Business Partners like Cobalt Iron, Silverstring, Front-Safe and STORServer, who led sessions on exciting topics ranging from how to create a cloud service in a TSM environment to how to transform your data backup costs into business opportunities.
Let’s start with the final day of Pulse General Sessions, which kicked off to a packed auditorium. Jamie Thomas, IBM Tivoli VP of Strategy and Development, took the stage first with a panel of IBM experts including CTOs Dave Lindquist, Jerry Cuomo, Sandy Bird, and Sky Matthews. These key technology leaders, with Jamie facilitating the discussion, led us through their technology roadmaps around what’s new and what’s coming in Cloud, Security Intelligence, the Mobile Enterprise and Smarter Physical Infrastructure. Following this panel discussion, Bruce Ross, General Manager for IBM GTS, helped explain how his team is enabling the acceleration of cloud services. It was a great line-up of experts, and many shared examples of how our technologies are driving innovation. You can watch the replay of the General Session here.
Next up on the main stage was Pulse 2013 Guest Speaker Peyton Manning of the Denver Broncos. Peyton gave a heartfelt speech on the “art and science of decision-making.” Are you making the right decisions to deliver innovation? Are you sticking to your decisions? These were some of the key topics covered. Offering his perspective on leadership, he delivered my favorite all-time quote: “You can either be a warrior or a worrier.” So true. Decisions backed by facts and data analysis help you drive the best outcomes, and technology has greatly impacted this process, he pointed out. Scott Hebner, WW Tivoli VP of Marketing, joined him onstage for some great Q&A that ended with Scott going long, “up and out,” to catch a bullet pass from Peyton. And, yes, the pass was caught.
Now, on to our Tivoli Storage sessions today, which featured many of our Business Partners. Thomas Bak, CSO & Partner of Denmark-based Front-safe, kicked off the morning with a very interactive discussion on how to create a cloud service around your Tivoli Storage Manager environment. Front-safe, a recent 2013 Beacon Award winner, is bringing TSM into new markets via a cloud-enabled portal. He mentioned that 3,000 end customers are already using this solution for backup, backing up literally 10,000+ servers! Front-safe’s new cloud backup service provider, i-Sanity, also addressed their model of “backup as a service”; they are the first Front-safe backup service provider in South Africa. You can learn more by watching this great session.
Silverstring Ltd, another Tivoli Storage Business Partner, led a session with customer Rabobank International, a large global financial institution with many dispersed TSM systems, who told us all about the best practices they use for daily TSM administration. Great content was also shared on how these best practices and cloud-based automation software can be combined to actually lower the cost of delivering TSM services and improve service levels.
Later in the day, Richard Spurlock of Cobalt Iron held an engaging session on how to transform your backup costs into business opportunities. Cobalt Iron combines TSM backup with a cloud experience in a simple model that offers flexible deployment options. Richard really helped the audience grasp how the costs and complexity of enterprise backup can “bankrupt” its value, and how Cobalt Iron’s solutions can leverage your backup investments into a flexible, high-value data protection solution. Earlier in the week, Cobalt Iron had been named a finalist for the 2013 Tivoli Business Partner Awards. Congratulations!
In one of the final Pulse 2013 storage sessions of the day, Business Partner STORServer delivered a compelling presentation on how to provide Backup-as-a-Service with their STORServer Backup Appliance and TSM. This session was of great help both for large enterprises looking for how best to charge for backup services and for MSPs looking for additional revenue streams.
We also heard from customers like Nyherji, who told the audience all about how they use FlashCopy Manager with TSM Node Replication to increase service levels and obtain high availability. I was especially interested to hear about the stellar benefits these Tivoli storage solutions brought to their business, from absolutely zero downtime to hugely improved backup and recovery times. Petur Eythorsson of Nyherji told a great story of how they completely redesigned their TSM environment and added TSM Node Replication and FlashCopy Manager to complete the solution.
I wrapped up my week at Pulse 2013 visiting with both old and new friends and colleagues later that evening, continuing to recap what was my 3rd, and, I believe, BEST Pulse ever! Congratulations and thank you to the IBM Tivoli Pulse team for a job well done. I can’t imagine how the next Pulse will trump this one, but, in true Tivoli fashion, I am sure it will.
And, in case you are having Pulse 2013 withdrawals already, we’ve captured some engaging storage videos this week that are available to you now. I hope you can take a moment to relive some of the great Tivoli Storage moments from this week and listen to all the great things that analysts, thought leaders, our customers, and Business Partners are saying about Tivoli Storage solutions:
And, in case you would like to hear more about what’s new and cool coming from Tivoli Storage, you can always join us again in Vegas this June for IBM Edge2013, which will bring you more opportunities to connect with your colleagues and learn about industry best practices for storage management, virtualization, and cloud technologies.
Following an outstanding PurePalooza party on Monday night that featured a 2-hour performance by 6-time Grammy Award winner Carrie Underwood, you might have expected Tuesday’s General Session to be a little quieter than usual. However, that wasn’t the case at all as the energetic vibe from today’s session picked up right where Monday left off -- helping to quickly shake off the effects of a wild Monday night for many.
This morning’s 90-minute general session was themed “Best Practices in Action” and featured a client panel of IT leaders from AT&T, Equifax, Carolinas Healthcare System and the Port of Cartagena sharing how they are converting opportunities from Cloud, Mobility and Smarter Physical Infrastructures into tangible business outcomes.
The Unified Recovery & Storage Management track picked up on the General Session theme with Tuesday’s breakout sessions featuring no fewer than TEN Tivoli Storage clients sharing real-life examples of how they are applying IBM Tivoli Backup and Recovery and Storage Management solutions to address a host of complex challenges. While this represents just a tiny sliver of the valuable content, some of the session takeaways included:
• Irfan Karachiwala (Ph.D.), Manager, Enterprise Data Strategy at Kindred Healthcare, a post-acute healthcare provider with over 450 locations in the U.S., has realized improvements in Recovery Point and Recovery Time Objectives by switching from data-only backups to VM-based image backups using Tivoli Storage Manager for Virtual Environments;
• Redbook author Gerd Becker of Empalis Consulting, a German-based IBM Premier Business Partner recommended the use of TSM Fastback for Workstations to provide continuous protection and meet shorter Recovery Point Objectives (by the way you can try TSM Fastback for Workstations for free through the currently available trial);
• BJ Klingenberg, Distinguished Engineer in IBM Global Technology Services, whose organization uses Tivoli Storage Productivity Center to manage the storage environments of over 1,000 accounts and 400 petabytes of data, suggested taking 12-hour storage environment snapshots to facilitate problem isolation and determination as part of a sound change and configuration management strategy;
• Petur Eythorsson of Nyherji, a Managed Service Provider in Iceland that manages 50 TSM servers mainly supporting mid-sized Windows-based environments, confirmed that, like many others, his client base has little patience for any recovery time beyond 2 hours;
• Huey Cantrell of Blue Cross Blue Shield of Louisiana reminded us that the overwhelming majority of restore requests are for data that was recently backed up, so his IT organization spends its time and energy focused on the ability to quickly recover data created in the past few days;
• Eduardo Zalamena of Mitsubishi Motors of Brasil pointed out that within large organizations you can’t treat all data the same way. For example, a 2-hour restore time can be catastrophic for some systems while being very appropriate for others. Eduardo recommended the implementation of system-specific recovery objectives to cost-effectively address requirements;
• John Clarke from United Healthcare, which protects over 70 million Americans, has altered his team’s approach to backup and recovery, emphasizing restore over backup, primarily because of the emergence of Big Data systems such as Netezza and Teradata.
On a day that put IBM clients “front and center,” it was only fitting to close Tuesday with the Tivoli Storage Birds of a Feather meeting. This two-hour, highly interactive discussion gave clients the opportunity to get all their questions answered and provide direct feedback to Tivoli Storage executives, developers and product managers.
Based on the buzz around the Storage breakouts, it was clear that the client focus on Tuesday was a hit, so a huge thank-you to all the clients who took the time to prepare and share their stories at Pulse2013. Pulse wouldn't be a reality without your contributions!
As another Pulse begins to wind down, it’s time to start thinking about IBM Edge2013 in June. The Edge conference will bring us back to Las Vegas to hear more clients describe how they are Optimizing Storage and IT. If you weren’t able to join the 8000 of us at Pulse2013, start making plans to attend Edge by finding out everything you need to know (including the early-bird discount available through the end of April) at the IBM Edge2013 Conference website.
Day 1 at Pulse 2013 was grand and exciting! I am not in Vegas, so how do I know? No points for guessing: all thanks to social media (#IBMPulse) and my friends in Vegas who are tweeting away every moment of the event.
Today is for IBM’s Business Partners! Deepak Advani, Tivoli General Manager, reemphasized the role of Business Partners and how critical they are in turning IT innovations into business results. Todd has captured the essence of this event very well in his Pulse blog.
Day 1 was also the day of awards! Congratulations are in order! The IBM Tivoli Award for Best Data Management Center Solution was picked up by Cobalt Iron, who will cover their solution during their Wednesday session, including how to transform data backup costs into a business opportunity (a must-attend session: session# 1914, Room# 114, 2:00pm on Mar 6).
Front-Safe received the IBM Tivoli Award for Best Cloud Solution. Watch out for their session on March 6 (session# 2436, room# 114), highlighting how to create a cloud service around IBM Tivoli Storage Manager.
Then came the real example of how we turn opportunities into positive outcomes: Chris Gardner! We talk about technologies all day, but the one that created ripples on Twitter was “The Pursuit of Happyness.” Thanks, Todd, for blogging from Pulse.
Day 1 evening was reserved for birthday celebrations! Yes, IBM Tivoli Storage Manager turns 20! The celebration marks two decades of backup and recovery management leadership. The Solution Expo hall was abuzz with storage enthusiasts! Thanks to Dave Russell of Gartner Inc. for joining TSM's 20th birthday celebrations! Needless to say, what a great way to network with storage experts from around the world!
And what a way to end Day 1: a musical extravaganza! Bella Electric Strings performed at the Expo opening.
Thanks, folks, for bringing Pulse to people who are tracking and trending the world over. What I may not know is how much you all won at blackjack! Have a good one!
Storage has seen tremendous growth in recent years due to the size, type, and complexity of data being created. Multimedia files, efforts to go “Green,” and the general desire to be more collaborative in our daily work, with all of our data on hand as needed, have increased the demand for storage. Analysts state that half of all data in existence today was created within the last five years. What does that say about what the next five will bring? In our increasingly digital world, this growth is driving the need for new thinking around the cost and complexity of managing storage; a desire for hyper-efficiency, to get the absolute most out of storage resources, is now the norm. This requires new inventions in storage techniques and data analytics. You have likely heard of cloud. In fact, you probably back up some portion of your data to a cloud service today, either from your personal computer, tablet, or smartphone. These services enable self-service delivery of storage and are built on an integrated, optimized infrastructure. Services like these are a model for doing business that can be translated into datacenters all over the world.
The IBM SmartCloud Virtual Storage Center is the cornerstone of our storage cloud services because it enables users to easily migrate to an agile, cloud-based storage environment—and then to manage it effectively. VSC delivers unique capabilities for our customers, including automated provisioning of virtual storage resources pooled across heterogeneous storage platforms, built-in efficiency features that remove traditional barriers to increasing the utilization of your existing storage assets, and non-disruptive data mobility so that your data is available no matter where you are. Storage can be provisioned directly from a catalog in a self-service fashion and then placed on the appropriate class of storage based on the required characteristics of the workload, with integrated data protection and resiliency to match service level requirements. As workload characteristics change over time, VSC can help reduce costs by migrating less critical data to less expensive media.
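To make the catalog-driven placement idea concrete, here is a minimal Python sketch of the general pattern: pick the cheapest storage tier that still meets a service class's requirements, and demote cold data over time. To be clear, this is an illustrative model only, not VSC's actual interface; the tier names, latency targets, service classes, and thresholds below are all hypothetical.

```python
# Illustrative sketch of catalog-driven, policy-based tier placement.
# All names (tiers, latency targets, catalog classes, thresholds) are
# hypothetical and not part of any IBM SmartCloud Virtual Storage Center API.

TIERS = [
    # (tier name, max latency in ms it can meet, relative cost per GB)
    ("ssd",      2,  10.0),
    ("sas",      10,  4.0),
    ("nearline", 50,  1.0),
]

CATALOG = {
    # self-service catalog classes mapping to workload requirements
    "gold":   {"max_latency_ms": 2,  "replicated": True},
    "silver": {"max_latency_ms": 10, "replicated": True},
    "bronze": {"max_latency_ms": 50, "replicated": False},
}

def place_volume(service_class: str) -> str:
    """Pick the cheapest tier that still meets the class's latency target."""
    req = CATALOG[service_class]
    candidates = [(cost, name) for name, lat, cost in TIERS
                  if lat <= req["max_latency_ms"]]
    if not candidates:
        raise ValueError(f"no tier satisfies {service_class}")
    _, tier = min(candidates)
    return tier

def demote_if_cold(tier: str, days_since_access: int, threshold: int = 90) -> str:
    """Model non-disruptive migration of cold data to cheaper media."""
    if days_since_access > threshold and tier != "nearline":
        return "nearline"
    return tier
```

Under this toy policy, a "silver" volume lands on the mid-cost tier (SSD would also meet the latency target, but costs more), and any volume untouched for longer than the threshold migrates to the cheapest tier, which mirrors the cost-reduction behavior described above.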
A United Kingdom-based investment and insurance company recently implemented IBM SmartCloud Virtual Storage Center to build a more flexible, automated approach to managing data than they had previously been able to achieve. Their IT department is now able to offer an improved service to all lines of business at a far lower cost. They have been able to reduce the number of physical disk drives they will need to invest in moving forward while improving performance, which has allowed them to redistribute their IT budget into other key areas.
Here in the United States, one of our major telecom providers has been leveraging Virtual Storage Center to increase the utilization of their storage assets to over 80%, while reducing their reliance on tier 1 storage to greatly improve their total cost of ownership. In addition, they have been able to automate many of the manual tasks associated with managing their enterprise data center, freeing IT resources to focus on strategic initiatives.
SmartCloud Virtual Storage Center has experienced a meteoric rise within our storage software portfolio, with over 60 new customers in just one quarter of general availability. This integrated approach to dealing with the major obstacles in enterprise data centers has been well received by analysts and customers alike. Leveraging key capabilities from our existing storage software portfolio, built on years of experience and leadership, IBM SmartCloud Virtual Storage Center has helped customers address the demands of rapid storage growth and provide built-in resiliency to ensure 24x7 operations.
For more information on IBM SmartCloud Virtual Storage Center and Pulse 2013, please check out Tivoli Storage at Pulse by clicking here.
Data protection matters! Actually, it matters even more with the advent of big data. The unique challenges of managing and protecting big data have forced IT professionals to revisit their data backup and protection policies. Every year, ESG conducts a forward-looking spending intentions survey, and they shared a couple of interesting facts that do not surprise me but definitely reinforce my thoughts. When organizations were asked what they would consider their most important IT priorities over the next 16-18 months, 30 percent responded “improved data backup and recovery”!
And when they were asked what they would characterize as challenges with their organizations’ current data protection processes and technologies, “cost” and “the need to reduce backup time” came out as the major concerns.
ESG analysts Mark Peters and Tony Palmer shared these insights as they took us through the results of their lab testing of Tivoli Storage Manager. If you are not familiar with IBM Tivoli Storage Manager (TSM), it is scalable client/server software primarily designed for centralized, automated data protection. The goal of the ESG report is to educate IT professionals and provide insight into advanced data backup technologies, such as incremental-forever backup and deduplication, and why they are so important today. Click here for the ESG video.
The TSM Lab Validation was performed using a combination of hands-on testing, audits of IBM customers in live production environments, and detailed discussions with IBM experts. The objective was to validate some of the valuable features and functions of the product, show how those can be used to solve real customer problems, and identify any areas for improvement. IBM has continuously invested in the TSM platform, bringing innovation to data protection and recovery, and ESG evaluates how the newer versions of TSM provide a turnkey solution to a range of data protection issues. They found that the two technologies (deduplication and progressive incremental backups) working in tandem were able to achieve 90 percent data reduction after just six incremental backups and 95 percent data reduction after ten backups. The replication function is also fully integrated with deduplication, enabling quicker recovery during disasters. TSM uses policy-based automation along with intelligent move-and-store techniques, helping to reduce data administration effort. Overall, ESG’s validation rightfully points to the key enhancements to the TSM platform that drive greater scalability, efficiency, and data availability. Please register and download the detailed 23-page ESG Lab Validation Report here. Opinions are my own.
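To get a feel for how progressive incremental backup and deduplication compound into reduction figures of the kind ESG measured, consider a back-of-the-envelope model in Python. The primary data size, daily change rate, and dedup ratio below are assumptions chosen for illustration; they are not taken from the ESG report, and the real figures depend heavily on workload.

```python
def data_reduction(primary_gb: float, change_rate: float,
                   num_backups: int, dedup_ratio: float = 0.5) -> float:
    """Fraction of backend storage saved versus taking a full copy
    every cycle (the traditional full-backup approach).
    All parameters are illustrative assumptions, not ESG-measured values."""
    # Traditional: a complete copy of the primary data per backup cycle.
    traditional_gb = primary_gb * num_backups
    # Progressive incremental: one initial full, then only changed data.
    transferred_gb = primary_gb + primary_gb * change_rate * (num_backups - 1)
    # Deduplication further shrinks what is actually stored on disk.
    stored_gb = transferred_gb * dedup_ratio
    return 1 - stored_gb / traditional_gb

# With a 1 TB primary, a 5% daily change rate, and 2:1 dedup, six
# backup cycles already yield roughly 90% reduction versus daily
# fulls, and the saving keeps growing with each additional cycle.
six_cycle = data_reduction(1000, 0.05, 6)    # roughly 0.90
ten_cycle = data_reduction(1000, 0.05, 10)   # higher still
```

The key point the arithmetic makes is that the saving comes mostly from never re-transferring unchanged data: the incremental-forever stream grows only with the change rate, while the traditional baseline grows with the full primary size every cycle.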
If you are following the developments related to Pulse 2013 you’re likely well aware that Peyton Manning has been announced as the keynote speaker and the evening entertainment at Pulse Palooza will feature Carrie Underwood. If you’ve been to Pulse before you also know you can expect compelling thought leadership in the General Sessions and deep content in over 300 breakout sessions to choose from.
Over and above all that exciting news there’s one thing that keeps attendees coming back year after year – the opportunity to network. Each year following Pulse, attendees tell us through the post-Pulse survey that networking with the over 8000 conference attendees rises to the top as the most valuable aspect of the event.
The opportunity to network with your Storage colleagues at Pulse 2013 will once again be front and center at the conference. Formal opportunities exist such as the Storage Birds of a Feather session, Meet the Experts in Storage, Client Connections along with access to Storage subject matter experts from development and product management in the Expo Hall. And of course in Las Vegas there’ll also be plenty of informal gatherings to connect with Storage professionals to share knowledge and expertise.
A great way to start the networking process is to take in the numerous client-led sessions in the Unified Recovery and Storage Management track within the Cloud and IT Optimization stream at Pulse 2013. Following the track kick-off, which features Dave Russell, Research Vice President at Gartner, you’ll have the opportunity to hear IBM clients sharing their experiences, some highlights of which include:
• Learning about the experiences of Chesapeake Energy with the new TSM Backup and Recovery Dashboard based on their participation in the Early Adopters Program;
• Understanding how The University of Sydney is using SmartCloud Virtual Storage Center to provide centralized management of its diverse storage environment;
• Hearing how Banco de Brasil improved its backup capabilities by taking advantage of the latest advancements in Tivoli Storage Manager;
• A panel of experts from Blue Cross Blue Shield of Louisiana, Kindred Healthcare and Centene Health discussing how they are protecting healthcare data with IBM storage solutions.
While this is just a tiny sampling of the type of organizations that will take to the podium in the Storage track at Pulse there’s a wealth of experience to help you tackle your most pressing Storage Management challenges. Taking in the sessions is only the beginning – connecting with these storage professionals in the numerous networking opportunities at Pulse is how the conference truly comes to life.
If you’re already registered for Pulse, you can start networking now by connecting with the growing list of speakers and other conference attendees on the Pulse2013 Vivastream site; if not, visit the PULSE 2013 home page for all the conference details and to register today.
Please plan to join IBM and thousands of your peers at the MGM Grand Hotel in Las Vegas, March 3 to 6, 2013.
PULSE 2013 is IBM’s premier event focused on business transformation and IT optimization, helping clients learn how to turn opportunities into outcomes.
As the planet becomes smarter, it becomes clear that a solid, robust, scalable and cost-effective IT infrastructure is required to create, store and manage all the information at the heart of these new opportunities.
Unified Recovery and Storage Management is the cornerstone track within the Cloud and IT Optimization stream at PULSE 2013. We are putting together a very exciting agenda, and I’d like to give you a preview of what you can learn from your peers, thought leaders, and yes, a few IBMers, by attending this track.
We kick off the track on Monday with a keynote presentation by Dave Russell, Research Vice President at Gartner. Dave will describe the trends that his team is seeing, and encourage you to take a position on transforming and optimizing your data management infrastructure.
During the 3 days of breakout sessions, you will learn how many of our customers have started on this journey, including best practices and outcomes. Our speakers include subject matter experts from:
• Two major banks
• Two universities
• Five healthcare organizations
• Several consumer and industrial companies
• Five managed service providers
• A leader in media and entertainment
Some of the top-of-mind topics that will be covered include management and protection of virtualized server and storage environments; advancements in disaster recovery and business continuity; storage in the Cloud, storage as the Cloud, and storage to the Cloud; backup consolidation and simplification; and how to easily cost-justify an efficiency improvement project to your management.
You can also learn how IBM “eats its own cooking” as the IBM Office of the CIO describes its use of IBM storage management software to drive costs out of our business while meeting the computing demands of a company the size of IBM.
You will also have the opportunity to learn about new products and enhancements – we can’t tell you what they are yet, but we’re pretty excited.
You can see who our expert speakers are, what they’ll be speaking about, and start to build your experience at Pulse this year by visiting the Pulse SmartSite and Agenda Builder at: http://www.pulsesmartsite.com/
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
We’re getting deep into the planning for our 6th annual PULSE conference (ibm.com/pulse), and I’m getting very excited about the storage content that is being assembled. Again, it will be at the MGM Grand Hotel in Las Vegas, March 3 – 6, 2013.
At our Storage Track Kickoff session, we’ll have some new things to announce and highlight, and we’re close to announcing an exciting keynote speaker.
Following the track kickoff, we’ll have 20 breakout sessions on data protection and storage management topics, covering advances in virtual machine protection, disaster recovery, cloud integration, and a lot more. We’re mixing it up a lot more this year to ensure you get a range of perspectives. We’ll have 21 client speakers discussing their experiences and best practices; plus 8 business and technology partners providing insights into added value approaches to storage management who will be complemented by IBMers sharing the new stuff we’ve been working on.
Among the client speakers will be storage professionals from across the globe representing major banking, healthcare, media, industrial and university organizations. There will also be sessions on a variety of cloud topics, including private cloud storage and backup-as-a-service opportunities.
To follow on a theme mentioned by Steve Mills in his keynote at PULSE 2012, we’ll show how IBM “eats its own cooking”, sharing how IBM’s Office of the CIO transformed its massive storage infrastructure; and how IBM’s Strategic Outsourcing services organization is leveraging our products to more effectively manage their clients’ storage environments.
There will be many cool things to see in the expo center again this year, including offerings from many of our ecosystem partners, and you can roll up your sleeves in the hands-on labs and product training and certification areas.
Have you heard about this year’s PULSE PALOOZA entertainment? We rocked the Grand Garden Arena with Maroon 5 in 2012, and will follow that with Carrie Underwood in 2013.
Now’s the time to act. Early bird registration, which saves client attendees $500 off the conference fee, closes December 31st. Go to http://ibm.co/pulseregister and get ready for an outstanding event. I look forward to seeing you there.
Server virtualization and storage virtualization go hand in hand. Centralized, virtualized storage is crucial for advanced server virtualization to be flexible and easy to manage. Companies are realizing that to unleash the real potential of a cloud-agile infrastructure, storage virtualization has to become as mainstream as server virtualization.
For many companies, there is a constant need for additional storage resources to support growing volumes of information. And if you don’t focus on managing your storage infrastructure, you can find that one virtual server is running out of storage capacity even while there is ample capacity in other parts of the network.
With storage virtualization, companies can make better use of existing investments in disk capacity and can often postpone the need to purchase additional capacity. As storage becomes virtualized, it becomes easier to manage and helps companies adapt to business needs much faster. And not to forget: it actually costs less!
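To make the "stranded capacity" point concrete, here is a minimal sketch (not an IBM product API; all names are hypothetical) of why pooling arrays behind a virtualization layer lets a volume land wherever free space exists, instead of being limited to the one array a server happens to be tied to:

```python
# Illustrative sketch: a virtualization layer pools capacity that would
# otherwise be stranded in separate arrays. Names are hypothetical.

class Array:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb

class VirtualPool:
    """Presents many arrays as one pool, so an allocation succeeds
    as long as some array in the pool has room for it."""
    def __init__(self, arrays):
        self.arrays = arrays

    def total_free_gb(self):
        return sum(a.free_gb for a in self.arrays)

    def allocate(self, size_gb):
        # Place the volume on whichever array has room (simplified:
        # no striping or splitting across arrays in this sketch).
        for a in self.arrays:
            if a.free_gb >= size_gb:
                a.free_gb -= size_gb
                return a.name
        raise RuntimeError("no single array can hold the volume")

pool = VirtualPool([Array("array-A", 100), Array("array-B", 500)])
# Without virtualization, a server tied to array-A alone could not
# grow past 100 GB even though 500 GB sits idle on array-B.
print(pool.allocate(300))    # placed on array-B
print(pool.total_free_gb())  # 300 GB still free across the pool
```

The same idea, with thin provisioning and online migration layered on top, is what lets a virtualized environment defer capacity purchases.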
For the past 4+ years at IBM, I was the worldwide product marketing manager for the IBM Tivoli Storage Manager (TSM) family of data protection and recovery solutions. But as of a month ago, I am now in a brand new role, Customer Experience Product Manager across the Tivoli Storage software group.
I’ve been asked to bring together various IBM business functions to help identify areas for customer experience improvement, measure success, and generally be the champion for all things that will help address a customer’s experience with our products.
In addition to reading and learning and reaching out to like-minded people, my initial efforts have been to assess where we are and the progress we’ve made to date. For example, we’ve made tremendous progress within the TSM family over the past 4 years starting with the release of TSM v6.1 – adding in valuable features such as deduplication, replication, monitoring and reporting, while also streamlining and unifying the user interface. We also introduced a new pricing model that makes it much easier to acquire, manage and forecast your backup software licenses.
The next release of the TSM family will be announced in early Q4-2012, and it will include some exciting enhancements to improve the user experience in TSM, TSM for Virtual Environments and Tivoli Storage FlashCopy Manager. And next year we’ll be rolling out a brand new user interface that promises to make the day-to-day administration of TSM much, much easier.
We’ll also be rolling out the IBM SmartCloud Virtual Storage Center, our storage hypervisor that was pre-announced at PULSE in March, and it promises to do for storage what VMware and others have done for servers: improve utilization, simplify management, and reduce costs. Many upcoming events in October will feature these Tivoli Storage product announcements, and we hope you can join us to hear more details.
Your experience with our products is only one pillar in the building that we’re trying to construct. How does it tie in with your interactions with our marketing and sales teams? Does our story match our solution capabilities? Are you getting what you were promised?
Does the product documentation provide the answers to your questions, or help to avoid needing to ask the questions at all?
And what about support? A recent survey by STORAGE Magazine of enterprise backup software users rated IBM #1 in the category of support. But what can we be doing better, or differently, to keep you delighted in your relationship with IBM?
At the end of the day, I’m looking at this role in two ways:
- What can we fix or improve?
- How can we ensure that everything new that we do is viewed through a customer experience lens?
To be successful, I’m going to need the help of a lot of people who have a vested interest in our success – across all facets of our business, but especially from our customers and partners. Please reach out to me at email@example.com with any ideas or comments.
Recently, I had the distinct pleasure of delivering a presentation on Data Storage and Compliance at the IBM Tivoli event 'Business Without Limits 2012' in Bangalore, India. More than 100 attendees from almost every industry came to the event.
My track for the day: Addressing Data Growth, Threats and Compliance; Unified Recovery.
The volume, velocity and importance of data have increased dramatically during the past few years, to the point where most backup and archiving solutions can't keep up with the scalability, functionality, performance, reliability and budget realities of today and tomorrow. Attendees learned how to reduce backup data capacity by as much as 95%; how to reduce the amount of new data at risk by 90% or more; and how to simplify global data recovery operations and achieve compliance by leveraging a unified management approach.
It was a privilege to present in such an interactive session, where customers saw how our broad product portfolio can help address their business challenges.
IBM now brings ‘Business Without Limits 2012’ to several cities across the United States in October and November. This is an exclusive IBM Tivoli event designed to increase awareness and thought leadership among IT managers, infrastructure leaders, systems administrators, storage managers and data center managers.
The event will focus on how IBM’s Integrated Service Management strategy brings together different capabilities to enable integrated delivery of business services across complex, interconnected physical and digital infrastructures.
IBM’s Business Without Limits event will have the following storage tracks:
- The pivotal role of storage in the modern data center
- Backup and unifying recovery
- Moving your data protection headaches to the cloud
- Storage analytics and reporting
This conference will explore how you can capitalize on the opportunities of a smarter planet and remove the barriers to innovation, helping you achieve “Business Without Limits.” As today’s leaders transition to smarter, flexible cloud infrastructures that speed the delivery of innovative products and services, effective storage management becomes a critical component of that success. Please join us at this event to learn more!
As an IBM marketing manager, my job includes writing about storage technology. This post is about more than technology, though. It’s about a new breakthrough capability for managing storage costs and service levels.
I recently met with IBM Distinguished Engineer Mike Sylvia, who has been working on a Business Transformation project to enable automated right tiering for storage in IBM data centers. Right tiering is the notion that data should be hosted on the optimal storage tier to balance cost and performance requirements.
Mike explained that applications tend to be hosted on top-tier storage. When he analyzed actual usage patterns, Mike found most data can be effectively hosted on lower-cost storage. Mike’s project put numbers to a problem that is often hidden from view and, until now, nearly impossible to solve.
Hosting data on the wrong storage tier turns out to be a huge efficiency problem. Mike predicts IBM will save $13 million over 3 years in one data center by periodically moving data to the right tier. During the pilot, users saw their cost for storage drop by 50% per TB on average. This is big.
Like many advancements, IBM’s automated right tiering capability is accomplished by integrating existing technology. Mike Sylvia’s project combines storage virtualization, storage management automation and analytics. Today, IBM offers the technology in a bundled solution called SmartCloud Virtual Storage Center.
How does it work?
Step 1: IBM’s storage virtualization controller collects detailed usage metrics about storage it manages throughout the data center, without impacting application performance.
Step 2: IBM’s Storage Analytics Engine studies usage patterns over time to understand performance requirements.
Step 3: Storage tier recommendations are generated in reports that can be shared with application owners and IT management.
Step 4: Storage virtualization enables online data migration, with no disruption to applications or users.
Repeat: Usage patterns change over time, of course, so right tiering becomes an ongoing process.
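The analyze-and-recommend steps above can be sketched in a few lines. This is a hedged illustration, not IBM's actual analytics: the I/O-density metric, tier names and thresholds are all assumptions chosen to show the shape of the loop:

```python
# Hypothetical sketch of the right-tiering loop: map observed usage to
# a recommended tier and flag volumes sitting on the wrong one.
# Thresholds and tier names are invented for illustration.

def recommend_tier(iops_per_gb):
    """Map observed I/O density to a storage tier."""
    if iops_per_gb >= 1.0:
        return "tier-1 (SSD/fast disk)"
    if iops_per_gb >= 0.1:
        return "tier-2 (midrange disk)"
    return "tier-3 (capacity disk)"

def right_tier_report(volumes):
    """Steps 2-3: analyze usage patterns and emit move recommendations."""
    report = []
    for vol in volumes:
        target = recommend_tier(vol["avg_iops"] / vol["size_gb"])
        if target != vol["current_tier"]:
            report.append((vol["name"], vol["current_tier"], target))
    return report

volumes = [
    {"name": "db-logs", "size_gb": 200, "avg_iops": 800,
     "current_tier": "tier-2 (midrange disk)"},
    {"name": "archive", "size_gb": 5000, "avg_iops": 50,
     "current_tier": "tier-1 (SSD/fast disk)"},
]
for name, src, dst in right_tier_report(volumes):
    print(f"move {name}: {src} -> {dst}")
```

Note the second volume: a cold archive parked on top-tier storage is exactly the hidden cost Mike's analysis surfaced, and the migration in Step 4 then moves it down without an outage.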
Why does it work?
Automated right tiering delivers the efficiency benefits of Information Lifecycle Management without the headaches and hidden costs. Automated right tiering has significant benefits for both data owners and IT leaders, so everyone wins.
For example, application and database owners can gain the following benefits:
Applications can move to top tier storage when they need it, without waiting for a maintenance window.
Average storage costs drop significantly, without a drop in services.
IT leaders benefit, too. For example:
Storage tier decisions are based on analysis of actual usage patterns, not predictions. Storage performance management tasks are eliminated.
Data can quickly and easily be moved back to its original storage tier if requested, without incurring an outage.
IBM automated right tiering works with most storage systems, so deployment is nondisruptive.
The technology that enables automated right tiering has significant additional benefits, such as the ability to eliminate scheduled outages for storage system maintenance.
Problem solved. How has your organization addressed the storage right tiering challenge?
IDC has recently released its Worldwide Storage Software QView for the first quarter of 2012. In it, IDC estimates that the total storage software market for 1Q12 grew about 3.3% over 1Q11. IBM had a solid quarter while Symantec faltered, allowing IBM to take the overall #2 share rank position for 1Q12.
In the overall storage software market, IBM moved up to the #2 share rank position in 1Q12, gaining 2.0 share points over 1Q11.
IBM retained its #1 position for archiving software, growing faster than the market. HP holds the #2 spot with its 2011 acquisition of Autonomy.
IBM offers a comprehensive, flexible storage management software portfolio that helps organizations address storage management challenges across the enterprise, including data centers, remote/branch offices and desktop/laptop computers. Learn more about the specific components within the IBM storage software family that can help you create a more responsive and resilient storage infrastructure for your on demand business.
In a previous post, we talked about the recent reviews that the Tivoli Storage Productivity Center (TPC) received. In particular, we are very pleased with the 'Leader' designation received in the recent Gartner Magic Quadrant review for Storage Resource Management.
It's not just the analyst reactions that are positive. Based upon a customer-focused feature list, the product team undertook an overhaul of the Graphical User Interface (GUI) and introduced a dashboard that provides easy-to-use, comprehensive reporting. To ensure they had got it right, the proposed changes were demonstrated on the Expo floor at the Pulse 2012 conference in Las Vegas earlier this year. Responses from the user base were enthusiastic, to the extent that this next iteration is quickly becoming a sought-after item.
A beta test program was initiated at the conference as the true litmus test that the proposed new features would stand up in a real production environment. Early responses point to some interesting observations. When polled about their experiences with the next evolution of the product, one of the most talked-about aspects was the set of features provided to simplify complex reporting. Beta testers derived great time and productivity benefits from having a picture of the full storage environment, something they previously had to go to multiple places to gather. A common benefit registered was time savings when it came to complex reporting.
What is compelling, however, is the business analytics that this next iteration yields. Tivoli Storage Productivity Center (TPC) provides detailed topology views of the entire storage infrastructure. In the overhauled GUI, administrators can observe the overall health of the environment instantly. A simple right-click provides detailed views on each of the storage network entities. The facilitation of these environment-wide views led a beta customer to observe that 'more than just the storage engineer can now get a simple view of their SAN environment'. What does this mean? It means that what started out as a time saver for the practitioners, the storage engineers, now becomes an entryway for the management team to get a quick look at the overall environment, allowing for higher-level strategic discussions about storage environments and needs.
Is this good or bad? A recent survey revealed that CMOs will outspend CIOs on IT by 2017. When I tweeted this, I was asked by @jamie_joyce why it would take this long. My answer is that it's likely due to the classic tension between a cost-saving position on infrastructure and a growth position on business analytics or feature offerings. When you think about Big Data within business analytics and the proliferation of mobile devices as two huge growth areas, the commonality is a mass proliferation of data in orders of magnitude never imagined. The conversation comes back to storage, and the associated resource management.
Which way does your company lean? Where is your head in that tension between cost savings and growth when it comes to your storage environment?
I chatted with Product Marketing Manager Amalore Jude about this and the kind of reaction the team got at Pulse in Vegas in March of this year when they demoed the new GUI. He was quite pleased with the response. 'Customers were very excited looking at the new, next-generation interface,' he told me. 'Many are awaiting June 4, when they can actually lay their hands on it.'
Well, June 4 is around the corner. If you are a regular reader of this blog, it's quite likely I will meet you at the Edge Conference in Florida next week. If you're there, please tweet me @brenny or find me somehow and say hi.
The conference is selling out, but there are still passes available for the Tech Edge portion of the four-part event. It's not too late to register. The Tech Edge portion is well laid out, with over 250 sessions led by IBMers and customers. Sometimes it's better to hear the war stories of your peers when you're trying to figure out how to exploit what you have, or are considering getting.
One customer who is speaking is Gary Fry of Unum. His session on March 6, 10-11am in Rm 115 is on Unum's use of the SAN Volume Controller and his experiences beta testing the new evolution of TPC.
So, if you are going, then I hope to see you out there. If you haven't yet decided, then getting a first look at this next evolution of storage infrastructure management is hopefully good motivation to consider it.
Every year I try to publish a set of storage trends that I believe most IT shops are trying to address, and where technologies exist to help. Here are my thoughts for 2012...
1) Storage breakthroughs nipping the “Digital Dark Age” in the bud
Since the early 1990s, an increasing proportion of the data created and used has been digital. Today, the world produces more than 1.8 zettabytes of digital information a year. Yet digital storage can in many ways be more perishable than paper. Disks corrode, bits “rot” and hardware becomes obsolete. This presents a real concern of a “Digital Dark Age,” where digital storage techniques and formats created today may not be viable in the future as the technology originally used becomes antiquated. We’ve seen this happen; take the floppy disk, for example: a storage tool that was so ubiquitous people still click on its enduring icon to “save” their word, presentation or spreadsheet documents, yet most Millennials have never seen one in person. But new research shows storage media can be vastly denser than they are today. New form factors such as solid-state disks will help provide more stable, longer-term preservation of data, and the promise of "the cloud" allows access to data anywhere, anytime. Recently, IBM researchers combined the benefits of magnetic hard drives and solid-state memory to overcome the challenges of growing memory demand and shrinking devices. Called Racetrack memory, this breakthrough could lead to a new type of data-centric computing that allows massive amounts of stored information to be accessed in less than a billionth of a second. This storage research challenges previous theoretical limits to data storage, helping ensure our digital universe will always be preserved.
2) Data curation will provide structure in the midst of the data deluge
Now that we have the capability to preserve our digital universe, we need to find a way to make it useful. We need to take the next step past data preservation to data curation. Data curation is the active and ongoing management of data through its lifecycle. This smarter data categorization adds value to data, helping glean new opportunities, improve the sharing of information and preserve data for later re-use. Social media is a great example of the power of curated data. Sites like Facebook, Google+, Pinterest, etc. compile our digital lives and give their users a platform to organize their content. However, there's also a lot of work involved in selecting, appraising and organizing data to make it accessible and interpretable. The key is bringing data sets together, organizing them and linking them to related documents and tools. If data can be stored in a way that provides context, organizations can find new and useful ways to use that data.
3) Storage analytics will open new business insights
With data curation giving organizations a platform to better utilize their data, analytics will help turn that data into intelligence and, ultimately, knowledge. With the information that historical trending analytics and infrastructure analytics provide, you can index and search in a more intelligent way than ever before. By running analytics on stored data, in backup and archive, you can draw business insight from that data, no matter where it exists. The application of IBM Watson technology to healthcare provides a good example. Watson collects data from many sources and is able to analyze its meaning and context. By processing vast amounts of information and using analytics, it can suggest options targeted to a patient's circumstances and assist decision makers, such as physicians and nurses, in identifying the most likely diagnosis and treatment options for their patients. Through intelligent storage and data retrieval systems, we can learn more from the information we have today to improve service to customers or open new revenue streams by leveraging data in new ways.
4) Storage becomes a celebrity – new business needs are pushing storage into the spotlight
As our digital, data-driven universe expands, certain industries are able to reach new levels of innovation by having the capacity to house, organize and instantaneously access information. For example, Hollywood is known for its big-budget blockbusters, but it’s the big storage demands required by new formats such as digital, CGI, 3D and high definition that are impacting not just the bottom line but studios’ ability to produce these types of movies. Data sets for movies have reached the petabyte level. Filmmakers are beginning to trade in film reels for SSDs, as just one day’s worth of filming can generate hundreds of terabytes of data. The popularity of these high data-generating formats means studios are looking for new storage technologies that can handle the demand. The healthcare industry may be facing an even bigger data dilemma than the entertainment business. Take a look at the University of Leipzig in Germany, which has a major genetic study called LIFE to examine disease in populations. LIFE is cataloging genetic profiles of several thousand patients to pinpoint gene mutations and specific proteins. This process alone generates multiple terabytes of data. Even one 300-bed hospital may generate 30 terabytes of data per year. Those figures will only grow with higher-resolution medical imaging and new tools and services such as electronic healthcare records.
5) Intervention... the data hoarders
In this era of Big Data, more is always better, right? Not so, especially when every byte of data costs money to store and protect. Businesses are turning into data hoarders, spending too much time and money collecting useless or bad data, potentially leading to misguided business decisions. This practice can be changed with simple policy decisions and by implementing capabilities that already exist in smarter storage, but companies are hesitant to delete any data (and, many times, duplicate data) due to the fear of needing specific data down the line for business analytics or compliance purposes. Part of the solution starts with eliminating the copies. Nearly 75% of the data that exists today is a copy (IDC). By deleting and disabling redundant information, organizations are investing in data quality and availability for content that matters to the business. Consider the effect of unneeded data, costing money as it replicates throughout an organization’s information systems. This outdated data can also potentially be accessed for fraud. Ensuring the quality of data is not costly; not getting it right is.
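The "eliminate the copies" idea rests on a simple mechanism: identify identical content by its hash and store it once. Here is a minimal sketch of that principle; real deduplication products chunk data and hash at the block level, so hashing whole objects here is a simplification for illustration:

```python
# Minimal sketch of copy elimination via content hashing, the idea
# behind deduplication. Whole-object hashing keeps the example short;
# production dedup works on sub-file chunks.
import hashlib

def dedup(objects):
    """Keep one physical copy per unique content; return (store, refs)."""
    store, refs = {}, {}
    for name, data in objects.items():
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)  # store the content only once
        refs[name] = digest             # duplicates just point at it
    return store, refs

objects = {
    "report-v1.doc": b"quarterly numbers",
    "report-copy.doc": b"quarterly numbers",  # duplicate content
    "notes.txt": b"meeting notes",
}
store, refs = dedup(objects)
print(len(objects), "logical objects,", len(store), "physical copies")
```

Three logical objects collapse to two physical copies; applied to the roughly 75%-copy figure cited above, this is where the capacity savings come from.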
ARE YOU SPEAKING AT PULSE? IF SO, READ ON PLEASE...and book your room at the MGM Grand today to avoid a price increase!
1. Have you uploaded your presentation? The deadline to upload presentations was January 20th to enable appropriate reviews and posting to the Pulse 2012 SmartSite Agenda Builder. Your presentation will be converted to PDF and can be downloaded or printed in advance by attendees, pending your approval. For a full list of presentation guidelines and processes please review the Presentation tab on the online Speaker Kit.
2. Do you know what audio visual equipment will be available in your session room? Click the A/V tab in your online Speaker Kit to review this important information.
3. Are you connected? Follow the conference news & highlights on Twitter or the Pulse blog. Click the Speaker Kit tab to find links and hashtags for use with social media. Find Pulse attendees using the Pulse SmartSite agenda builder.
4. Attendees are always interested in getting to know their speaker! Do you have a bio? Review and update your brief bio by logging onto the Speaker Kit website.
5. Have you started to build your Pulse conference agenda on SmartSite, the attendee conference portal? You will need your conference registration confirmation number to login to this site. Click the Build My Agenda icon to view scheduled sessions.
6. Have you registered for the conference and booked your hotel? Review the registration instructions listed in the registration tab on the speaker kit website.
Very important... Conference hotel accommodations are limited and available on a first-come, first-served basis. Conference rates are valid until January 27, 2012 or until the room block is sold out, whichever comes first.
Please take a few minutes to review the information in your online Speaker Kit, and follow-up on all speaker actions as needed.
If you have any questions or need additional information, please contact speaker support at PulseSpeaker@experient-inc.com. We look forward to seeing you at the MGM Grand in Las Vegas March 4-7!
IBM has detailed innovative projects and research that show new storage approaches to support Big Data growth and drive business innovation.
Healthcare, financial services, media and entertainment, and scientific research are among the many industries facing the challenge of storing and managing the proliferation of data to extract critical business value. As storage needs rise dramatically, storage budgets lag, requiring new innovation and approaches around storing, managing and protecting Big Data, cloud data, virtualized data and more.
Watson-inspired Storage Takes on the Cosmos: IBM is working on a project with the Institute for Computational Cosmology (ICC) at Durham University in the U.K. and Business Partner OCF to build a storage system to better store and manipulate Big Data for its cosmology research on galaxies. ICC is adopting the same IBM General Parallel File System technology used in the IBM Watson system to store and manage more than one petabyte of data from two significant projects on galaxy formation and the fate of gas outside of galaxies. The enhanced storage system will enable up to 50 researchers working collaboratively to access and review data simultaneously. It will also help ICC learn to manage data better, storing only essential data and storing it in the most efficient way.
New Storage Platform Delivers More Personalized, Visual Healthcare: A medical archiving solution from IBM Business Partners Avnet Technology Solutions and TeraMedica, Inc., powered by IBM systems, storage and software, gives patients and caregivers instant access to critical medical data at the point of care. Developed in collaboration with IBM, the medical information management offering can manage up to 10 million medical images, helping health care practitioners provide better patient care with greater efficiency and at reduced costs. The integrated platform allows users to manage and view clinical images originating from different treatments and providers, bringing secure, consistent image management and distribution at the point of care.
Virtualization Consolidates Storage Footprint for Medical Center: Kaweah Delta Health Care District (KDHCD), a general medical and surgical hospital in Visalia, Calif., needed to reduce its operational costs while increasing storage space. To meet these demands, KDHCD tapped IBM's storage systems to create a new storage platform that reallocates resources and saves a significant amount of data space with thin-provisioning technology. Virtualization creates a smaller hardware footprint, so the hospital also saved on power and cooling costs. KDHCD now has a consolidated storage environment that provides the scalability, ease of management and security to support critical healthcare data management for the hospital.
IBM is looking for customers and business partners who are interested in participating in an Early Access Program (EAP)/Beta Program for an upcoming release of FlashCopy Manager, Data Protection for SQL, and Data Protection for Exchange. If you would like to nominate your organization to participate in this EAP/Beta, please send an email to:
Mary Anne Filosa (firstname.lastname@example.org)
and be sure to include your organization's name. Once your email is received, you will be sent instructions for signing off on the EAP/Beta legal form online; when that signoff has been completed, you will be sent a link to the program's nomination site. We encourage you to respond quickly if you are interested, as the program begins in mid-December.
Live Webcast: Using Tivoli Storage Productivity Center to be the "eyes" into your SAN environment, and to see how that environment is changing. LIVE!
In the ever-changing SAN environment, Tivoli Storage Productivity Center has many components to help the storage administrator know when and where to focus their attention. We will walk through many of these in a live demo and see how they can be used.
Let TPC help you keep up with storage growth instead of working longer hours!
Scott McPeek is IBM Program Director, Storage Sales Enablement. He has worked in the software industry for more than 30 years; the last ten have been with IBM, as part of the TrelliSoft SRM acquisition. Scott now focuses on storage resource management, storage performance management and virtualization with products like TPC, SVC and the Storwize V7000.
How are you spending your time this weekend? Polishing up your Pulse 2012 storage session abstract, hopefully! With only 4 days left to submit a 100-word abstract by Nov. 7, we thought it would be helpful to share some final pointers. Keep in mind that this year's theme is Business Without Limits, and we are seeking to understand how you gained visibility, control and automation to deliver better business results.
What are the key benefits to you as a speaker? One full Pulse conference pass ($1,995 value), the opportunity to gain visibility for your company, and an incredible networking opportunity with over 7,000 industry experts, press and analysts.
Here are some pointers on how to get your Storage Management session abstract accepted:
1. Focus it on topics such as how you used Tivoli Storage Manager to manage "big data"; success with recent upgrades; or cloud storage
2. Tell us about the key business challenges you were trying to solve, and how IBM Tivoli storage solutions helped you address these challenges
3. What was the ROI, or key results, from implementing a Tivoli storage solution, and what valuable lessons did you learn from the experience
If you do not plan to speak at Pulse and attend the conference complimentary, don't forget to register during early bird registration by December 16. Early bird registration can save you up to $700 off registering onsite! See you at Pulse 2012!
Well, it's that time again, hard to believe, I know... The PULSE call for papers has opened, and we want to have another banner year in the Tivoli Storage sessions! Last year we were standing room only in many of our sessions, and this year we hope to fill each room once again.
As for topic suggestions, we'd like to hear from customers who:
Use TSM to manage 'big data'
Have best practices, created with our Tivoli Storage portfolio that they want to share
NEW!! Technical Services Webinar: Capacity Planning in a Tivoli Storage Manager Environment
As much as customers would like to "back up everything and keep it forever", storage is not unlimited. The reality of ever-increasing data growth, combined with regulatory compliance and the associated risks, makes the arduous task of capacity planning for backup ever more critical. A new Reporting and Monitoring tool is available with Tivoli Storage Manager (TSM). This new tool, based on IBM Tivoli Monitoring, can collect and report on historical data and is an integral part of a capacity planning regimen.
This session will demonstrate a capacity planning methodology that conforms to the ITIL Capacity Planning process description by showing how the TSM Reporting and Monitoring tool and other TSM components can be used to ease the pain of capacity planning. Additionally, this session will look at strategies, like data deduplication, that reduce the amount of backup data while maintaining regulatory compliance.
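To make the arithmetic concrete, here is a minimal back-of-the-envelope sizing sketch in Python. The formula and every number in it are illustrative assumptions for this post, not figures from the webinar or from TSM itself.

```python
# Hypothetical sizing model (names and numbers are illustrative only).

def backup_capacity_tb(front_end_tb, daily_change_pct, retention_days,
                       dedup_ratio, annual_growth_pct, years):
    """Estimate backup pool capacity (TB) after `years` of data growth."""
    # Grow the protected front-end data at the assumed annual rate.
    grown = front_end_tb * (1 + annual_growth_pct / 100) ** years
    # One full copy plus the retained daily incremental changes.
    raw = grown * (1 + daily_change_pct / 100 * retention_days)
    # Deduplication shrinks what is actually stored on disk.
    return raw / dedup_ratio

# 100 TB front end, 2% daily change, 30-day retention, 4:1 dedup,
# 20% annual growth, sized two years out.
estimate = backup_capacity_tb(100, 2, 30, 4, 20, 2)
```

Plug in your own change rate, retention policy and measured deduplication ratio; the point is simply that retention and dedup dominate the answer.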
Presenters: Mark Vanderboll, IBM Tivoli Global Response Team; Dave Daun, IBM Advanced Technical Skills
This is part 2 of a 3 part post on how somebody responsible for a private storage environment could save their company a pile of money by implementing cloud storage techniques. Part I introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. In this 2nd post, I’m going to explain some of the key storage cloud management controls that can be used to drive down cost.
Storage services are standardized. When it comes to shopping, I avoid (at almost all costs) actually going to the store. You can keep all the time and frustration of traffic, fighting for a parking place, wandering aimlessly through aisles of choices, and standing in checkout lines. I’ll take the simplicity and speed of a good online catalog any day.
The idea of shopping from a catalog isn’t new and the cost efficiency it offers to the supplier isn’t new either. Public storage cloud service providers seized on the catalog idea quickly as both a means of providing a clear description of available services to their clients, and of controlling costs. Here’s the idea… I can go to a public cloud storage provider like Amazon S3, Nirvanix, Google Storage for Developers, or any of a host of other providers, give them my credit card, and get some storage capacity. Now, the “kind” of storage capacity I get depends on the service level I choose from their catalog. These folks each offer a handful of different service level options. Amazon S3, for example, offers Standard Storage or Reduced Redundancy Storage (can you guess which one costs less?).
Most of today’s private IT environments represent the opposite end of the pendulum swing – total customization. Every application owner, every business unit, every department wants complete flexibility to customize their storage services in any way they want. This expectation is one of the reasons so many private IT environments have such a heavy mix of tier-1 storage. Since there is no structure around the kind of requests that are coming in, the only way to be prepared is to have a disk array that could service anything that shows up. Not very efficient… There has to be a middle ground.
Enter the private storage cloud with its storage service catalog. In the consultative service engagements we’ve done, we have found that most private enterprises have something like fifteen-ish distinct data types (things like database, e-mail, video, shared files, home directories, etc). A simple storage service catalog would describe the specific service levels needed by each of these data types. Let’s take “Database” and build out the scenario.
The first thing you’ll need is a place to create your catalog of storage services. IBM Tivoli Storage Productivity Center Standard Edition is a good option (man, what a mouthful). Let’s just call it TPC SE for short… hmm, I’ll probably get fired for that :-) You’re going to use the wizard to create a new “Database” catalog entry.
Now, for each catalog entry, there are a variety of service levels that can be defined that cover things like capacity efficiency, I/O performance, data access resilience, and disaster protection. By this point you’re probably rolling your eyes because you know your application owners… and they’re going to want every byte of their data to have the highest available service in each of these areas (and you wonder why you have so much tier-1 storage). A little bit further into this post we’re going to talk about the wonder of usage-based chargeback, but we’re getting ahead of ourselves. For now, let’s assume you’re having a coherent conversation with your application owners and are able to define realistic needs for your database data. Maybe something like this…
From there, you’re back to the wizard. Actually defining the attributes of the catalog entry is a little mundane (lots of propeller-head knobs and dials to turn), but once you’re done – you’re done! – and life gets really efficient. So, let’s get the mundane stuff out of the way. First are the capacity efficiency and I/O performance attributes. Be sure to notice that for “Database” we are telling the catalog we want virtual volumes – from a storage hypervisor. There will be a test in a paragraph or so :-)
Then the data access resilience attributes.
And finally the disaster protection attributes.
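If it helps to picture it, a catalog entry is really just a structured bundle of attributes. Here’s a minimal sketch in Python of what our “Database” entry might capture; the attribute names are invented for illustration and are not TPC SE’s actual fields.

```python
# Hypothetical "Database" service-catalog entry (field names invented).
database_entry = {
    "name": "Database",
    "capacity_efficiency": {
        "virtual_volumes": True,   # carved from the storage hypervisor pool
        "thin_provisioned": True,
        "compressed": False,       # databases often compress poorly
    },
    "io_performance": {
        "raid_level": "RAID-10",
        "auto_tiering": True,      # hot blocks may move up to SSD
    },
    "data_access_resilience": {
        "multipath": True,
        "mirrored": True,
    },
    "disaster_protection": {
        "replication": "async",
        "rpo_minutes": 15,
    },
}
```

The value isn’t in the data structure itself; it’s in agreeing on a small, fixed menu of these entries instead of custom-configuring every request.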
I told you it was a little mundane. But now come the exciting results that really drive cost out of the environment and save a huge amount of administrative time.
First is capital expense. You’re running mostly tier-1 disk arrays today. You have just finished defining the fifteen-ish catalog entries your company is going to use. Some, like “Database”, call for storage services that are often associated with tier-1 disk arrays. Most others don’t. With a little intelligent forecasting, you should be able to determine exactly how much tier-1 storage capability you really need, and how much lower-tier storage you can start using. We’ve seen clients shift their mix from 70% tier-1 to 70% lower-tier storage (pretty significant capital expense shift). And if the thought of moving all that existing data from tier-1 to a lower tier makes you shudder, refer back to Part I of this post and look again at the data mobility provided by a good storage hypervisor (Test: did you notice that for “Database” we told the catalog we wanted virtual volumes – from a storage hypervisor…).
The second big savings is in operational expense (keep reading).
Storage provisioning is self-service. Most public storage services are targeted at end users like you and me who bring our credit card and provision some storage. Private storage clouds are a little different. Administrators we talk to aren’t generally ready to let all their application owners and departments have the freedom to provision new storage on their own without any control. In most cases, new capacity requests still need to stop off at the IT administration group. But once the request gets there, life for the IT administrator is sweet!
Here comes the request from an application owner for 500GB of new “Database” capacity (one of the options available in the storage service catalog) to be attached to some server. After appropriate approvals, the administrator can simply enter the three important pieces of information (type of storage = “Database”, quantity = 500GB, name of the system authorized to access the storage) and click the “Go” button (in TPC SE it’s actually a “Run now” button) to automatically provision and attach the storage. No more complicated checklists or time consuming manual procedures.
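In pseudocode terms, the whole “Run now” flow boils down to three inputs and a catalog lookup. The sketch below is hypothetical; `provision_from_catalog` and its fields are invented for illustration and are not TPC SE’s actual API.

```python
# Hypothetical sketch of catalog-driven provisioning (invented names).

def provision_from_catalog(service_class, size_gb, host, catalog):
    """Provision a virtual volume of the requested service class."""
    if service_class not in catalog:
        raise ValueError(f"unknown service class: {service_class}")
    spec = catalog[service_class]
    volume = {
        "class": service_class,
        "size_gb": size_gb,
        "host": host,
        "thin": spec["thin_provisioned"],
    }
    # A real tool would now carve the virtual volume from the hypervisor
    # pool, map it to the host's initiators, and record the allocation
    # for chargeback.
    return volume

catalog = {"Database": {"thin_provisioned": True}}
vol = provision_from_catalog("Database", 500, "dbserver01", catalog)
```

Notice what the administrator does *not* have to decide: RAID level, array, tier, or pathing. All of that was settled once, when the catalog entry was defined.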
Storage is paid per use. It’s a little-appreciated but incredibly powerful tool in the quest to drive cost out of the environment. When end users are aware of the impact of their consumption and service level choices, they tend to make more efficient choices. Conversely (we all know what happens here), when there’s no correlation between service level choices and end user visibility to cost… well… you have a lot of tier-1 storage on the floor.
A chargeback tool like IBM Tivoli Usage and Accounting Manager (TUAM) completes the story we have been building…
You negotiate a set of storage service levels (like “Database”) with your application owners and business units.
You create the storage service catalog entry for “Database”
Your end users request some new “Database” capacity be assigned to a particular server.
You push the “Run now” button and the capacity is auto-provisioned.
Your end user receives an invoice (complete with individual line items for each class of service in which they are consuming capacity).
You’re in the cloud now!
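The invoice step is worth making concrete. Here’s a toy usage-based chargeback computation in Python; the rates are made up for the example, and a real tool like TUAM does far more (tiered rates, allocation rules, reporting).

```python
# Assumed $/GB/month rates per service class (illustrative only).
RATES_PER_GB = {"Database": 0.50, "Shared Files": 0.10}

def monthly_invoice(usage_gb_by_class):
    """Return per-class line items (class, GB, cost) and the total."""
    lines = [(cls, gb, gb * RATES_PER_GB[cls])
             for cls, gb in usage_gb_by_class.items()]
    total = sum(cost for _, _, cost in lines)
    return lines, total

# 500 GB of "Database" at $0.50 plus 2000 GB of "Shared Files" at $0.10.
lines, total = monthly_invoice({"Database": 500, "Shared Files": 2000})
```

Once the application owner sees that “Database” capacity costs five times as much per gigabyte as “Shared Files”, the conversation about realistic service levels gets a lot easier.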
Stay tuned for Part III of this post where I’ll explore some technical thoughts you’ll want to consider when picking a storage hypervisor.
I recently read an excellent post by Ron Riffe, a fellow IBMer discussing practical recommendations for introducing cloud techniques into a private storage environment – the end goal being to save your company a substantial amount of money while becoming more responsive to the needs of the business. The first of the four steps discussed in the post was to introduce a storage hypervisor – virtualization of your storage infrastructure. It’s a good idea, especially if you have already virtualized some or all of your production server environment with something like VMware.
But there’s more to it than just the efficiency and mobility you get from virtualizing. The customers we talk to are finding new value that rises out of the synergy when both the server and storage environments are virtualized. One example is in the area of data protection. In this post, I’m going to explain the 1+1=3 effect for data protection that comes from combining VMware with a good storage hypervisor.
Let’s start with a quick walk down memory lane. Do you remember what your data protection environment looked like before virtualization? There was a server with an operating system and an application… and that thing had a backup agent on it to capture backup copies and send them someplace (most likely over an IP network) for safe keeping. It worked, but it took a lot of time to deploy and maintain all the agents, a lot of bandwidth to transmit the data, and a lot of disk or tapes to store it all. The topic of data protection has modernized quite a bit since then.
Today, you’re using a server hypervisor (VMware) to efficiently pack several virtual machines onto one physical server – and to make it so you can deploy, move and decommission those VMs pretty much at will. If you are still using the old techniques for data protection (deploying an agent on each individual VM, and then transferring all the backup data for those VMs through the one IP network pipe) on that physical server, you’re probably running into significant performance and application availability problems, and also missing out on some significant savings (if you listen carefully, you can hear your backup environment screaming “modernize me, MODERNIZE ME!”).
Fast forward to today. Modernization has come from three different sources – the server hypervisor, the storage hypervisor and the unified recovery manager. The end result is a data protection environment that captures all the data it needs in one coordinated snapshot action, efficiently stores those snapshots, and provides for recovery of just about any slice of data you could want. It’s quite the beautiful thing.
Data capture: VMware has provided a nice set of APIs that allow disk arrays and backup vendors to intelligently drive snapshots of a VMware datastore (for the techies, these are the vStorage APIs for Data Protection, or VADP). The problem is that integration from a disk array to these APIs is a tier-1 kind of service that is found on very few disk arrays today. That’s where a good storage hypervisor comes in. A storage hypervisor will include its own integration between VMware VADP and hardware-assisted snapshots, and it will plug the control GUI directly into the VMware vCenter management console. That means, regardless of what type of disk array capacity you have chosen to use for your VMware data, the storage hypervisor will be able to do a hardware-assisted snapshot of the VMware datastore (all your VMs at once – sweet!).
Efficient storage: Here’s a scenario we see…
Administrators want to snapshot the VMware datastore 4 times a day. Four days’ worth are maintained – 16 total snapshots “online”
For longer term recovery, they promote one snapshot each day to a unified recovery manager. One month of these is maintained – 31 total snapshots “nearline”
The snapshots can add up, so efficiency is important. For the “online” snapshots, a good storage hypervisor stores only incremental changes, compresses the result and stores it as a thin provisioned volume on lower-tier disk capacity (the new 3TB SAS drives make a nice choice). Notice in this scenario, the administrator is also promoting one of the snapshots each day (say, the midnight snapshot) to an enterprise recovery manager. If you are using IBM’s Tivoli Storage Manager Suite for Unified Recovery, then it will insert deduplication in the list of efficiency techniques being applied to the snapshot (incremental snapshots that are deduplicated, compressed, and stored on lower-tier disk… that’s about as efficient as it gets).
Flexible recovery: Whether the snapshot is online or nearline, the only reason you have it is so that you can recover when something (anything) goes wrong. A good hypervisor / unified recovery manager combination will give VMware administrators the ability to peer inside the snapshot and recover individual files, virtual volumes, or entire VMs. Using the scenario above, your recovery point would be no more than 6 hours old for the last 4 days, and your recovery time would be measured in minutes.
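The retention arithmetic in the scenario is simple enough to sketch; the constants below just restate the schedule described above.

```python
# Restating the snapshot schedule from the scenario above.
SNAPS_PER_DAY = 4      # hardware-assisted snapshots of the datastore
ONLINE_DAYS = 4        # kept incremental/compressed by the hypervisor
NEARLINE_DAYS = 31     # one snapshot a day promoted to the recovery manager

online_snapshots = SNAPS_PER_DAY * ONLINE_DAYS   # 16 "online"
nearline_snapshots = NEARLINE_DAYS               # 31 "nearline"
worst_case_rpo_hours = 24 / SNAPS_PER_DAY        # at most 6 hours of loss
```

Change the schedule and the trade-off is immediately visible: more snapshots per day shrink the recovery point, at the cost of more capacity for the incremental changes.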
IBM offers one of the world’s best-known unified recovery managers and the world’s most widely deployed storage hypervisor. With over 7000 storage hypervisor deployments, we’ve had a lot of opportunity to build some depth. Deep integration with VMware for modernizing your data protection environment is one example. If you are running VMware and haven’t yet modernized data protection, IBM can help. You can learn more at the following links.
Join the conversation! The virtual dialogue on this topic will continue in a live group chat on September 23, 2011 from 12 noon to 1pm Eastern Time. Join some of the Top 20 storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
Simplify Data Protection and Reduce Costs With Unified Recovery Management
On September 22, we will be hosting an educational webcast that will address the challenges of providing data protection and recovery for rapidly growing amounts of diverse enterprise data. During this call, you will hear about our unified recovery management solution that can help reduce complexity, risk and costs. Included in this solution is a new simple, value-based option for procuring and managing software licenses.
Speaker: Rich Vining, Product Marketing Manager
Date: September 22, 2011 Time: 11:00 AM Eastern US
Please register for this event using this link. After registering you will receive a confirmation note with call-in instructions.
To borrow a phrase from a fellow blogger… Customer interest in cloud storage is very, very hot, and that’s been keeping us very, very busy. The interest underscores the fact that public storage cloud providers have sent a “cost shockwave” through the industry and customers are taking notice.
While CIOs may still be too concerned about security and service levels to put much real corporate information in the public cloud, they have taken notice that these service providers are offering storage capacity at prices that are often lower than what they are paying for their own private storage. Sure, a service provider theoretically has more economy of scale and so could demand a better price from their hardware vendors, but they also have some profit margin to build into their “service”. There has to be more to it. The customers I talk to are wondering what these service providers are doing to operate at those costs – and if any of their techniques can be applied in a private storage environment.
The situation begs the question “what is it that differentiates these public storage clouds from the traditional private storage environments that most clients operate?” From our experience with customers, there are four significant differences.
Storage resources are virtualized from multiple arrays, vendors, and datacenters – pooled together and accessed anywhere. (as opposed to physical array-boundary limitations)
Storage services are standardized – selected from a storage service catalog. (as opposed to customized configuration)
Storage provisioning is self-service – administrators use automation to allocate capacity from the catalog. (as opposed to manual component-level provisioning)
Storage usage is paid per use – end users are aware of the impact of their consumption and service level choices. (as opposed to paid from a central IT budget)
In this post, I’m going to try to explain these four concepts in sufficient detail that somebody responsible for a private storage environment could walk away with some practical recommendations that could save their company a pile of money. Most of this isn’t really original (the concepts have been around for a while), but so few enterprises operate this way that the person who introduces their company to these ideas often looks like a genius (and who doesn’t like that!!). It’s a long topic, so I’ve broken it into 3 posts.
In Part I of this post: I’ll explain the value of virtualizing storage resources. Hint: you’ve likely already done it to your server resources with some sort of server hypervisor like VMware vSphere, or IBM PowerVM, or Microsoft Hyper-V… so now let’s look at what you get from doing it to your storage resources with a storage hypervisor.
In Part II of this post: I’m going to explain how public storage clouds use management controls like service catalogs, self-service provisioning, and pay-per-use to drive down their costs. I’ll also try to offer some practical ideas for using these techniques in a private enterprise setting to gain similar efficiencies.
In Part III of this post: I’m going to explore some technical thoughts you’ll want to consider when picking a storage hypervisor.
Ready to jump in?
Storage resources are virtualized. Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them. Server hypervisors and the virtual machines they manage have improved efficiency (no more wasted compute resources), freed up mobility, and ushered in a whole new “cloud” language.
Well, the same ideas apply to storage. As administrators catch on to these ideas, it won’t be long before we’ll be asking the question “Do you remember back when virtual machines used disks that really were physical arrays (all that “physical” stuff that kept everything in one place and slowed all your processes down)?”
In August, Gartner published a paper that observed “Heterogeneous storage virtualization devices can consolidate a diverse storage infrastructure around a common access, management and provisioning point, and offer a bridge from traditional storage infrastructures to a private cloud storage environment” (there’s that “cloud” language). So, if I’m going to use a storage hypervisor as a first step toward cloud enabling my private storage environment, what differences should I expect? (good question, we get that one all the time!)
Perhaps the most obvious expectations are improved efficiency and data mobility. The basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, you can move a slice (or virtual volume) from tier to tier, from vendor to vendor, and now, from site to site all while the applications are online and accessing the data. This opens up all kinds of use cases that have been described as “cloud”. One of the coolest is inter-site application migration. Just recently, a hurricane hit the eastern coast of the United States. If your datacenter had been in the projected path of that hurricane and if you had implemented both a server hypervisor (let’s say VMware vSphere for your Intel servers and IBM PowerVM for your Power systems), and a storage hypervisor platform (let’s say IBM SVC), then here’s what you might have said: “Hey, the hurricane is coming, let’s move operations to another datacenter further inland…” IBM SVC Stretched-cluster allows you to access the same data at both locations, giving you the ability to do an inter-site VMware vMotion and PowerVM Live Partition Mobility (LPM) move – non-disruptively. As far as the end users are concerned, their applications are running in a private cloud. For you… you avoided a disaster and got to sleep well that weekend.
But storage hypervisors are more, much more than just virtual slices and data mobility. Remember, we’re trying to think like a service provider who is driving cost out of the equation. Sure, we’re getting high utilization from allocating virtual slices, but are we being as smart as we could be about allocating those slices? A good storage hypervisor helps you be smart.
Thin provisioning: You have a client that asks for 500GB of new capacity. You’re going to give it to him as thin provisioned virtual capacity, which is a fancy way of saying you’re not going to actually back it with real physical storage until he writes real data on it. That helps you keep cost down.
Compression: Same guy also asks to keep several snapshot copies of his data for recovery purposes. You’re going to start by giving him thin provisioned capacity for those snapshots, but you’re also going to compress whatever data those snapshots produce – again adding to your efficiency.
Agnostic about vendors: Because you’re providing virtual storage resources from a storage service catalog (we’ll talk more about that in Part II of this post), you have the freedom to shift the physical storage you operate from all tier-1 to a more efficient mix of lower tiers, and while you’re doing it you can create a little competition among as many disk array vendors as you like to get the best price / support.
Smart about tiers: If you shut your eyes real tight and think about the concept of a “virtual” disk that is mobile across arrays and tiers, you’ll quickly start asking questions about having the storage hypervisor watch for I/O patterns on blocks within that virtual disk that would benefit from higher tier capacity, like solid-state (SSD) or flash disk for example. A good storage hypervisor will automate the detection of such patterns and move hot data blocks to these highest tiers of storage if you have them.
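To see how the first two of these techniques stack up, here’s a rough illustration in Python; the written percentage and compression ratio are assumptions for the example, not measured product numbers.

```python
# Illustrative model of thin provisioning plus compression (assumed ratios).

def physical_gb_needed(allocated_gb, written_pct, compression_ratio):
    """Thin provisioning stores only written data; compression shrinks it."""
    written = allocated_gb * written_pct / 100
    return written / compression_ratio

# A 500 GB virtual volume that is 40% written, compressing 2:1,
# consumes only 100 GB of physical capacity.
consumed = physical_gb_needed(500, 40, 2.0)
```

Stack vendor competition and automated tiering on top of that, and the gap between the capacity you sell to application owners and the capacity you buy gets very interesting.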
Are you getting the picture of why so many enterprises are beginning to agree with Gartner that a storage hypervisor can be a great first step in transitioning traditional IT into a private cloud storage environment? Application owners come to you for storage capacity because you’re responsible for the storage at your company. In the old days, if they requested 500GB of capacity, you allocated 500GB off of some tier-1 physical array – and there it sat. But then you discovered storage hypervisors! Now you tell that application owner he has 500GB of capacity… What he really has is a 500GB virtual volume that is thin provisioned, compressed, and backed by lower-tier disks. When he has a few data blocks that get really hot, the storage hypervisor dynamically moves just those blocks to higher tier storage like SSDs. His virtual disk can be accessed anywhere across vendors, tiers and even datacenters. And in the background you have changed the vendor storage he is actually sitting on twice because you found a better supplier. But he doesn’t know any of this because he only sees the 500GB virtual volume you gave him. It’s “in the cloud”.
I wanted to let everyone know that IBM Tivoli Storage FlashCopy Manager for Windows Version 2.2.1 was just released!
In June of this year, I blogged about IBM Tivoli Storage FlashCopy Manager version 2.2.0. I talked about how FlashCopy Manager 2.2 provides fast application-aware backups and restores leveraging advanced snapshot technologies. I also discussed how FlashCopy Manager on Windows 2.2.0 added new support for Microsoft Exchange Server 2010 and Microsoft SQL Server 2008 R2 as well as other enhanced performance and functionality.
We continue to add more functions and features to IBM Tivoli Storage FlashCopy Manager. This past Friday (December 10th, 2010), IBM released IBM Tivoli Storage FlashCopy Manager Version 2.2.1 with the following changes:
Updates Applicable to All Platforms
Support for SVC 6.1
Support for IBM Storwize® V7000
Updates Applicable to all FlashCopy Manager components that run on AIX, Linux, and Solaris
Support for AIX 7.1 in non-SAP environments
Support for Oracle ASM on Solaris and Linux
Support for SVC and IBM Storwize® V7000 Space Efficient target volumes for FlashCopy Manager Cloning operations on AIX, Linux, and Solaris
Updates Applicable to the FlashCopy Manager for Exchange Component
Support for VSS backups to a TSM Server without needing a TSM for Copy Services or FCM license
Support for SVC and DS8000 family devices in a VMware guest OS environment
Improved support for VSS backups in clustered and offload environments
Updates Applicable to the FlashCopy Manager for SQL Component
Improved query and backup performance in environments with large numbers of SQL Servers
Support for VSS backups to a TSM Server without needing a TSM for Copy Services or FCM license
Support for SVC and DS8000 family devices in a VMware guest OS environment
Improved support for VSS backups in clustered and offload environments
The Central Depository Company of Pakistan Limited (CDC) is the only depository in Pakistan, handling the electronic settlement of transactions carried out at the country's three stock exchanges.
Business need: With numerous point management tools, time-consuming manual processes and no single help desk, IT administrators were constantly operating in a reactive mode and faced just 90 percent system availability.
Solution: IBM Business Partner Gulf Business Machines helped CDC implement an Integrated Service Management solution from IBM that increases IT efficiency while improving the effectiveness of business services.
Benefits: 90 percent reduction in average time for root cause analysis; estimated 50 percent reduction in time to support new lines of business; 98 percent improvement in service level agreement (SLA) levels.
"IBM Tivoli Storage Productivity Center gave us greater visibility into storage utilization, helping us optimize capacity planning and improve our storage ROI to save 30%" —Syed Asif Shah, Chief Information Officer, Central Depository Company of Pakistan Limited
Read the complete case study for more details on the solutions CDC used to implement an Integrated Service Management solution. More success stories of other customer implementations of IBM technologies can be found here.