Gartner’s Magic Quadrant for SRM and SAN Management software is one of the leading industry publications that provides competitive benchmarking across storage management capabilities and helps support decision making for investments in storage management software. In its latest edition, Gartner positions IBM in the ‘Leader’ quadrant.
IBM Tivoli Storage Productivity Center (TPC) is a clear leader in the SRM market; many enterprises are using TPC today to manage their ever-growing, complex and highly critical storage environments.
TPC is designed to provide comprehensive device management capabilities that include automated system discovery, provisioning, configuration, performance monitoring and replication for storage systems and storage networks. TPC provides storage administrators a simple yet effective way to conduct storage management for multiple storage arrays and SAN fabric components from a single integrated management console.
TPC edges out all other vendors in terms of comprehensively achieving the vision for SRM. TPC provides storage management capabilities that allow administrators to efficiently simplify, centralize, optimize and automate storage management tasks. View the Gartner Magic Quadrant for Storage Resource Management and SAN Management Software, compliments of IBM, here.
If you haven't unleashed the potential of TPC, watch out for the upcoming version 5.1 release – slated to be announced on June 4, 2012 at IBM Edge2012.
To learn more, please register for IBM's premier storage conference: IBM Edge2012, being held June 4-8 in Orlando, Florida. This is a 4.5-day conference, 100% focused on IBM storage solutions - with many TPC 5.1 and IBM SmartCloud Virtual Storage Center sessions and customer speakers. Tivoli speakers will be featured throughout the conference, and more than 30 sessions will focus exclusively on Tivoli's entire suite of products, taught by IBM Distinguished Engineers, leading product experts, clients and partners. A special registration discount applies to all Pulse 2012 attendees! Register here.
Note: This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available at http://www.gartner.com/technology/reprints.do?id=1-1A16V0B&ct=120405&st=sb. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Every year I try to publish a set of storage trends that I believe most IT shops are trying to address and where technologies exist to help resolve. Here are my thoughts for 2012...
1) Storage breakthroughs – nipping the "Digital Dark Age" in the bud
Since the early 1990s, an increasing proportion of the data created and used has been digital. Today, the world produces more than 1.8 zettabytes of digital information a year. Yet digital storage can in many ways be more perishable than paper. Disks corrode, bits "rot" and hardware becomes obsolete. This presents a real concern of a "Digital Dark Age," where digital storage techniques and formats created today may not be viable in the future as the technology originally used becomes antiquated. We've seen this happen—take the floppy disk, for example: a storage tool so ubiquitous that people still click on its enduring icon to "save" their word, presentation or spreadsheet documents—yet most Millennials have never seen one in person.
But new research shows storage media can be vastly denser than they are today. New form factors such as solid-state disks will help provide more stable, longer-term preservation of data, and the promise of "the cloud" allows access to data anywhere, anytime. Recently, IBM researchers combined the benefits of magnetic hard drives and solid-state memory to overcome the challenges of growing memory demand and shrinking devices. Called Racetrack memory, this breakthrough could lead to a new type of data-centric computing that allows massive amounts of stored information to be accessed in less than a billionth of a second. This storage research challenges previous theoretical limits to data storage—ensuring our digital universe will always be preserved.
2) Data curation will provide structure in the midst of the data deluge
Now that we have the capability to preserve our digital universe, we need to find a way to make it useful. We need to take the next step past data preservation to data curation. Data curation is the active and ongoing management of data through its lifecycle. This smarter data categorization adds value to data, helping glean new opportunities, improve the sharing of information and preserve data for later re-use.
Social media is a great example of the power of curated data. Sites like Facebook, Google+ and Pinterest compile our digital lives and give their users a platform to organize their content. However, there's also a lot of work involved in selecting, appraising and organizing data to make it accessible and interpretable. The key is bringing data sets together, organizing them and linking them to related documents and tools. If data can be stored in a way that provides context, organizations can find new and useful ways to use that data.
3) Storage analytics will open new business insights
With data curation giving organizations a platform to better utilize their data, analytics will help turn that data into intelligence and, ultimately, knowledge. With the information that historical trending analytics and infrastructure analytics provide, you can index and search more intelligently than ever before. By running analytics on stored data, in backup and archive, you can draw business insight from that data, no matter where it exists.
The application of IBM Watson technology to healthcare provides a good example. Watson collects data from many sources and is able to analyze its meaning and context. By processing vast amounts of information and using analytics, it can suggest options targeted to a patient's circumstances and assist decision makers, such as physicians and nurses, in identifying the most likely diagnosis and treatment options for their patients. Through intelligent storage and data retrieval systems, we can learn more from the information we have today to improve service to customers or open new revenue streams by leveraging data in new ways.
4) Storage becomes a celebrity – new business needs are pushing storage into the spotlight
As our digital and data-driven universe expands, certain industries are reaching new levels of innovation by having the capacity to house, organize and instantaneously access information.
For example, Hollywood is known for its big-budget blockbusters, but it's the big storage demands required by new formats such as digital, CGI, 3D and high definition that are impacting not just the bottom line, but studios' ability to produce these types of movies. Data sets for movies have grown to the petabyte level. Filmmakers are beginning to trade in film reels for SSDs, as just one day's worth of filming can generate hundreds of terabytes of data. The popularity of these high data-generating formats means studios are looking for new storage technologies that can handle the demand.
The healthcare industry may be facing an even bigger data dilemma than the entertainment business. Take a look at the University of Leipzig in Germany, which runs a major genetic study called LIFE to examine disease in populations. LIFE is cataloging genetic profiles of several thousand patients to pinpoint gene mutations and specific proteins. This process alone generates multiple terabytes of data. Even one 300-bed hospital may generate 30 terabytes of data per year. Those figures will only grow with higher-resolution medical imaging and new tools or services such as electronic healthcare records.
5) Intervention... The Data
In this era of Big Data, more is always better, right? Not so – especially when every byte of data costs money to store and protect. Businesses are turning into data hoarders, spending too much time and money collecting useless or bad data, potentially leading to misguided business decisions. This practice can be changed with simple policy decisions and by implementing capabilities that already exist in smarter storage, but companies are hesitant to delete any data (and, many times, duplicate data) for fear of needing specific data down the line for business analytics or compliance purposes.
Part of the solution starts with eliminating the copies. Nearly 75% of the data that exists today is a copy (IDC). By deleting and disabling redundant information, organizations are investing in data quality and availability for the content that matters to the business. Consider the effect of unneeded data, costing money as it replicates throughout an organization's information systems. This outdated data can also potentially be accessed for fraud. Ensuring the quality of data is not costly—not getting it right is.
This is Part 3 of a 3-part post on how somebody responsible for a private storage environment could save their company a pile of money by implementing cloud storage techniques. Part I introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. Part II explained how a storage service catalog, self-service provisioning, and usage-based chargeback can be used to drive down cost. In this third post, I'm going to share some thoughts that should help you be smarter about choosing a storage hypervisor.
The first step is to remind ourselves what we’re trying to accomplish with a storage hypervisor. From our experience deploying over 7000 storage hypervisors, the starting point for most folks is improved efficiency and data mobility. Remember, the basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, people are looking for the freedom to move a slice (or virtual volume) from tier to tier, from vendor to vendor, and more recently, from site to site all while the applications are online and accessing the data.
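If you like to see concepts as code, here's a minimal Python sketch of that pooling idea. Everything in it is hypothetical — the class, the array names and the capacities are invented for illustration, not taken from any IBM product:

```python
# Hypothetical sketch of the storage-hypervisor pooling idea:
# physical arrays contribute raw capacity to one pool, virtual
# volumes are carved from the pool, and a volume's backing array
# can change without the host seeing a different volume.

class Pool:
    def __init__(self):
        self.arrays = {}    # array name -> free GB
        self.volumes = {}   # volume name -> (backing array, size GB)

    def add_array(self, name, capacity_gb):
        self.arrays[name] = capacity_gb

    def create_volume(self, name, size_gb):
        # Place the volume on any array with enough free capacity.
        for array, free in self.arrays.items():
            if free >= size_gb:
                self.arrays[array] -= size_gb
                self.volumes[name] = (array, size_gb)
                return
        raise RuntimeError("pool exhausted")

    def migrate_volume(self, name, target_array):
        # The volume identity stays the same; only the backing
        # array changes -- the application keeps doing I/O.
        array, size_gb = self.volumes[name]
        if self.arrays[target_array] < size_gb:
            raise RuntimeError("target array too full")
        self.arrays[array] += size_gb
        self.arrays[target_array] -= size_gb
        self.volumes[name] = (target_array, size_gb)

pool = Pool()
pool.add_array("tier1-array", 1000)
pool.add_array("tier2-array", 5000)
pool.create_volume("db-vol", 800)             # lands on tier1-array
pool.migrate_volume("db-vol", "tier2-array")  # moved, same identity
```

The point of the sketch is the last line: the volume keeps its identity while its backing array changes, which is exactly the mobility that matters.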
To pull off this level of mobility – in servers or storage – it's important that the hypervisor not be dependent on the underlying physical hardware for anything except capacity (compute capacity in the case of a server hypervisor like VMware, storage capacity in the case of a storage hypervisor). Think about it… Wouldn't it be odd to have a pair of VMware ESX hosts in a cluster, one running on IBM hardware and one on HP hardware, and be told that you couldn't vMotion a virtual machine between the two because some feature of your virtual machine would just stop working? If you tie a virtual machine to a specific piece of hardware in order to take advantage of a function in that hardware, it sort of defeats the whole point of mobility. The same thing applies to storage hypervisors. Virtual volumes that are dependent on a particular physical disk array for some function, say mirroring or snapshotting for example, aren't really mobile from tier to tier or vendor to vendor any more.
But it's more than just a philosophical issue; there's real money at stake (you may want to read what comes next a couple of times). In Part II of this post I discussed using a storage service catalog as a means of defining specific service level needs for your different categories of data. These service levels covered the gamut from capacity efficiency and I/O performance (for you techies, that's RAID levels, thin provisioning, use of solid state disks, etc.) to data access resilience and disaster protection (multi-pathing, snapshotting, mirroring…). The reason so many datacenters have an overabundance of tier-1 disk arrays on the floor is because, historically, if you wanted to take advantage of things like thin provisioning, application-integrated snapshots, robust mirroring for disaster recovery, high performance for database workloads, access to solid-state disk, etc., you had to buy tier-1 'array capacity' to get access to these tier-1 'storage services' (did you catch the subtle difference?). Now, I don't have anything against tier-1 disk arrays (my company sells a really good one). In fact, they have a great reputation for availability (a lot of the bulk in these units is sophisticated, redundant electronics that keep the thing available all the time). But with a good storage hypervisor, tier-1 'storage services' are no longer tied to tier-1 'array capacity' because the service levels are provided by the hypervisor. Capacity…is capacity…and you can choose any kind you want. Many clients we work with are discovering the huge cost savings that can be realized by continuing to deliver tier-1 service (from the hypervisor), only doing it on lower-tier disk arrays. As I noted in Part II of this post, we've seen clients shift their mix of 'array capacity' from 70% tier-1 to 70% lower-tier arrays while continuing to deliver tier-1 'storage services' to their data. This YouTube video describes an example of that at Sprint.
Smart idea #1: Be careful to understand what, if any, dependency a storage hypervisor has on the capability of an underlying disk array to deliver function to your virtual volumes (like thin provisioning, compression, snapshotting, mirroring, etc.).
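To make the 'storage services' vs. 'array capacity' distinction from Smart idea #1 concrete, here's a small, hypothetical Python sketch. The service classes and feature names are mine, invented for illustration — they are not any product's actual catalog:

```python
# Hypothetical sketch of decoupling 'storage services' from 'array
# capacity': service levels live in the hypervisor layer, so any
# backing tier can deliver a tier-1 service class.

service_catalog = {
    # service class -> features supplied by the storage hypervisor
    "gold":   {"thin_provisioning": True, "snapshots": True,
               "mirroring": True},
    "bronze": {"thin_provisioning": True, "snapshots": False,
               "mirroring": False},
}

def provision(volume_name, service_class, backing_array):
    """Any array can back any service class -- the features come
    from the hypervisor layer, not from the array's firmware."""
    features = service_catalog[service_class]
    return {"volume": volume_name, "array": backing_array, **features}

# Tier-1 service delivered on a lower-tier array:
vol = provision("erp-data", "gold", "tier2-array")
print(vol)
```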
Next thought. There are three rather interrelated solution categories in the area of dealing with outages and protecting data.
Disaster avoidance (“I know the hurricane is coming, let’s move the datacenter further inland”)
Disaster recovery (“oh oh, the hurricane hit, and my datacenter is dead”)
Data protection (“oops, I goofed up my data and I need to recover”)
IT managers we talk to have been successfully dealing with disaster recovery (for the techies, that's array mirroring along with recovery automation tools like VMware Site Recovery Manager (SRM), IBM PowerHA, or others) and data protection (that's array snapshotting along with specific connectors for databases, email systems, etc., as well as connectors to enterprise backup managers like Tivoli Storage Manager) for years. The third area, disaster avoidance, has really gained steam because storage hypervisors now allow you to access the same data at two locations, giving you the ability to do an inter-site application migration with things like VMware vMotion, PowerVM Live Partition Mobility (LPM), or others. When you are expecting a disaster, disaster avoidance lets you transparently get out of the way. But it doesn't magically keep all the other unexpected bad things from happening. You'll still want to be prepared with disaster recovery and data protection. And if you are implementing a storage hypervisor, you shouldn't be forced to choose.
Smart idea #2: Remembering smart idea #1, be sure that your storage hypervisor has its own ability to provide for disaster avoidance (inter-site mobility), disaster recovery (mirroring that’s integrated with recovery automation tools) and data protection (snapshotting that’s integrated with applications and backup tools).
One final thought. A storage hypervisor isn't an island unto itself. Like a server hypervisor, it exists in a broader datacenter. Administrators need to be able to see it in the context of the disk arrays it manages, the servers (or virtual machines) that use its virtual volumes, the applications that need backups or clones, and the disaster recovery automation that's coordinating recovery of servers, storage, and networks… You get the picture. When the challenges of day-to-day operations happen (and they do happen almost every day)…
…the storage network planner needs to look at the logical data path from a virtual machine to its physical server, through the SAN switch, to the storage hypervisor and finally to the physical disk array. He’ll need that storage hypervisor to be integrated with a SAN topology tool.
…an application owner calls up with a performance issue that he’s blaming on ‘the storage’. You’ll need to be able to isolate performance across the whole data path (including the part of the path that goes through the storage hypervisor).
…an application owner wants a consistent snapshot of his application to use as a backup copy (or a test clone). You’ll need a connector that talks to both the application and the storage hypervisor to identify the virtual volumes that need to be snapshotted, prepare the application for the snapshot, and then provide the application owner with an inventory of snapshots he can use to recover from.
…you make the move toward cloud techniques in your private datacenter – implementing a storage service catalog, self-service provisioning, and usage-based chargeback. You'll need a storage hypervisor that can be auto-provisioned and that can provide the metrics on who is using how much storage (see the chargeback sketch after Smart idea #3 below).
Smart idea #3: Make a list of all the day-to-day operational things you do today with physical storage, and the things you hope to automate in the future, and be careful to understand if your storage hypervisor is sufficiently instrumented and integrated – or if it’s creating a new island to be separately managed.
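As promised above, here's a minimal Python sketch of the usage-based chargeback idea from the last bullet. The owners, service classes and rates are all invented for illustration:

```python
# Hypothetical sketch of usage-based chargeback: roll up virtual-volume
# capacity by owner and service class, then price each class differently.
from collections import defaultdict

# (owner, service_class, provisioned_gb) -- illustrative records only
volumes = [
    ("payroll", "tier1", 500),
    ("payroll", "tier2", 2000),
    ("web",     "tier2", 1200),
]

price_per_gb = {"tier1": 0.90, "tier2": 0.30}  # assumed monthly rates

bill = defaultdict(float)
for owner, svc_class, gb in volumes:
    bill[owner] += gb * price_per_gb[svc_class]

for owner, amount in sorted(bill.items()):
    print(f"{owner}: ${amount:,.2f}/month")
```

Notice how the tier-2 rate dominates the payroll bill: the same mechanics that meter usage also make the case for shifting capacity to lower-tier arrays.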
And now a word from our sponsors :-) IBM offers the world's most widely deployed storage hypervisor. With over 7000 deployments, hundreds of them in the newer inter-site disaster avoidance configuration, we've had a lot of opportunity to build some depth. As you evaluate using cloud storage techniques in your private enterprise, you'll find the things I talked about in this blog series available in IBM products today. They can help you save your company a pile of money (and make you look like a genius while you're doing it).
Royse Wells, Chief Storage Architect for International Paper, discusses Tivoli Storage Manager Operations Center, previewed at Pulse 2013. TSM Operations Center is a new graphical interface that helps administrators and management get quick summary views of the backup environment and simplifies administration.
Jeff Jones, UNUM
UNUM Uses Tivoli Storage Manager for Virtual Environments
Jeff Jones is senior infrastructure manager at UNUM, a leading provider of financial protection benefits in the US and UK. UNUM has about 85% virtual servers today. UNUM uses Tivoli Storage Manager for Virtual Environments to deliver faster backups and restores, and reduce the risk of data loss for 650 Windows and Linux VMs.
Klavs Kabell, IT-WIT
Modernizing Backup for Today’s Virtual Environments
Klavs Kabell is a Senior System Consultant at IT-WIT, an IBM Business Partner in Denmark specializing in backup solutions. Klavs discusses how backup solutions have evolved as VM deployments have grown. Tivoli Storage Manager for Virtual Environments helps simplify VM backup administration and tracking, while incremental 'forever' technology improves storage efficiency.
Thomas Bak, Front-safe
Cloud backup and archive using TSM and Frontsafe Portal
Front-safe received the Best Cloud Solution award at the IBM Pulse 2013 conference, and the 2013 IBM Beacon Award for the Best Solution to Optimize the World's Infrastructure. Learn about the value of enabling backup as a cloud service, using Front-safe Portal software.
Laura DuBois, Program VP of Storage for IDC, and Steve Wojtowecz, IBM VP of Storage and Networking Software, discuss client opportunities and requirements for storage clouds and compute clouds. Client cloud storage requirements include backup and archive clouds, file storage clouds, and storage that supports compute clouds.
Chris Dotson, IBM CIO Office
IBM’s storage transformation featuring SmartCloud Virtual Center
Chris Dotson works in IBM's CIO Office as a Senior IT Architect for Services Transformation. He is guiding IBM's own storage transformation. As a large enterprise, IBM manages over 100 petabytes of data, growing at 25% per year. Chris discusses block storage virtualization, automated block storage tiering, file cloud storage, and automated block storage management at IBM. He shows how SmartCloud Virtual Storage Center is reducing storage costs by 50% with no noticeable performance impact to users.
BJ Klingenberg, IBM Global Technology Services
IBM Global Technology Services Uses Tivoli Storage Productivity Center (TPC) to Manage Clients’ Storage Environments
BJ Klingenberg is a Distinguished Engineer and Enterprise Storage Management lead for IBM. BJ shares his experiences using IBM Tivoli Storage Productivity Center in IBM's Service Provider environment. Service Provider environments are governed by Service Level Agreements, so managing capacity, performance and availability is essential. Storage efficiency is essential to remaining competitive. See how TPC helps IBM deliver outstanding customer service at competitive prices.
Jason Buffington, ESG Senior Analyst, and Tom Hughes, IBM Worldwide Storage Executive, discuss business and technical challenges for data protection. Tom and Jason discuss new solutions and best practices for protecting data more efficiently and effectively in today's cloud, mobile and virtual environments.
Colin Dawson, TSM Server Architect, introduces Tivoli Storage Manager Operations Center, the next generation of backup administration from IBM. He describes how TSM Operations Center was designed and built using extensive user feedback.
Jonathan Bryce, OpenStack Foundation founder and Todd Moore, IBM
OpenStack Provides Compute, Storage and Network Interoperability for Clouds
The OpenStack Foundation has gained 170 corporate and over 8,200 individual members since its inception in 2012, making it one of the fastest growing cloud standards. Jonathan Bryce, Executive Director and founder of the OpenStack Foundation, and Todd Moore, IBM Software Group Director of Interoperability and Partnerships, discuss the capabilities and opportunities for building cloud solutions using OpenStack to manage compute, storage and network resources.
Deepak Advani, IBM
Optimizing IT Infrastructures for Today’s Workloads
Deepak Advani, General Manager of Tivoli Software, discusses top issues and opportunities facing clients as they adopt new breeds of applications to engage with customers and improve operations using mobile devices, cloud and analytics.
IBM has detailed innovative projects and research that show new storage approaches to support Big Data growth and drive business innovation.
Healthcare, financial services, media and entertainment, and scientific research, among many industries, face the challenge of storing and managing the proliferation of data to extract critical business value. As storage needs rise dramatically, storage budgets lag, requiring new innovation and approaches around storing, managing, and protecting Big Data, cloud data, virtualized data and more.
Watson-inspired Storage Takes on the Cosmos: IBM is working on a project with the Institute for Computational Cosmology (ICC) at Durham University in the U.K. and Business Partner OCF to build a storage system to better store and manipulate Big Data for its cosmology research on galaxies. ICC is adopting the same IBM General Parallel File System technology used in the IBM Watson system to store and manage more than one petabyte of data from two significant projects on galaxy formation and the fate of gas outside of galaxies. The enhanced storage system will enable up to 50 researchers working collaboratively to access and review data simultaneously. It will also help ICC learn to manage data better, storing only essential data.
New Storage Platform Delivers More Personalized, Visual Healthcare: A medical archiving solution from IBM Business Partners Avnet Technology Solutions and TeraMedica, Inc., powered by IBM systems, storage and software, gives patients and caregivers instant access to critical medical data at the point of care. Developed in collaboration with IBM, the medical information management offering can manage up to 10 million medical images, helping healthcare practitioners provide better patient care with greater efficiency and at reduced cost. The integrated platform allows users to manage and view clinical images originating from different treatments and providers to bring secure, consistent image management and distribution at the point of care.
Virtualization Consolidates Storage Footprint for Medical Center: Kaweah Delta Health Care District (KDHCD), a general medical and surgical hospital in Visalia, Calif., needed to reduce its operational costs while increasing storage space. To meet these demands, KDHCD tapped IBM's storage systems to create a new storage platform that reallocates resources and saves a significant amount of data space with thin-provisioning technology. Virtualization creates a smaller hardware footprint, so the hospital also saved on power and cooling costs. KDHCD now has a consolidated storage environment that provides the scalability, ease of management, and security to support critical healthcare data management for the hospital.
Following an outstanding PurePalooza party on Monday night that featured a 2-hour performance by 6-time Grammy Award winner Carrie Underwood, you might have expected Tuesday’s General Session to be a little quieter than usual. However, that wasn’t the case at all as the energetic vibe from today’s session picked up right where Monday left off -- helping to quickly shake off the effects of a wild Monday night for many.
This morning’s 90-minute general session was themed “Best Practices in Action” and featured a client panel of IT leaders from AT&T, Equifax, Carolinas Healthcare System and the Port of Cartagena sharing how they are converting opportunities from Cloud, Mobility and Smarter Physical Infrastructures into tangible business outcomes.
The Unified Recovery & Storage Management track picked up on the General Session theme, with Tuesday's breakout sessions featuring no fewer than TEN Tivoli Storage clients sharing real-life examples of how they are applying IBM Tivoli Backup and Recovery and Storage Management solutions to address a host of complex challenges. While this represents just a tiny sliver of the valuable content, some of the session takeaways included:
• Irfan Karachiwala (Ph.D.), Manager, Enterprise Data Strategy at Kindred Healthcare, a post-acute healthcare provider with over 450 locations in the U.S., has realized improvements in Recovery Point and Recovery Time Objectives by switching from data-only backups to VM-based image backups using Tivoli Storage Manager for Virtual Environments;
• Redbook author Gerd Becker of Empalis Consulting, a Germany-based IBM Premier Business Partner, recommended the use of TSM FastBack for Workstations to provide continuous protection and meet shorter Recovery Point Objectives (by the way, you can try TSM FastBack for Workstations for free through the currently available trial);
• BJ Klingenberg, a Distinguished Engineer in IBM Global Technology Services, which uses Tivoli Storage Productivity Center to manage the storage environments of over 1000 accounts and 400 petabytes of data, suggested taking 12-hour storage environment snapshots to facilitate problem isolation and determination as part of a sound change and configuration management strategy;
• Petur Eythorsson of Nyherji, a Managed Service Provider in Iceland that manages 50 TSM servers mainly supporting mid-sized Windows-based environments, confirmed that, like many others, his client base has little patience for any recovery time beyond 2 hours;
• Huey Cantrell of Blue Cross Blue Shield of Louisiana reminded us that the overwhelming majority of restore requests are for data that was recently backed up, so his IT organization spends its time and energy focused on the ability to quickly recover data created in the past few days;
• Eduardo Zalamena of Mitsubishi Motors of Brasil pointed out that within large organizations you can't treat all data the same way. For example, a 2-hour restore time for some systems can be catastrophic, while for others it may be very appropriate. Eduardo recommended the implementation of system-specific recovery objectives to cost-effectively address requirements;
• John Clarke from United Healthcare, which protects over 70 million Americans, has altered his team's backup and recovery focus – emphasizing restore over backup – primarily because of the emergence of Big Data systems such as Netezza and Teradata.
On a day that put IBM clients "front and center," it was only fitting to close Tuesday with the Tivoli Storage Birds of a Feather meeting. This two-hour, highly interactive discussion gave clients the opportunity to get all their questions answered and provide direct feedback to Tivoli Storage executives, developers and product managers.
Based on the buzz around the Storage breakouts, it was clear that the client focus on Tuesday was a hit, so a huge thank-you to all the clients who took the time to prepare and share their stories at Pulse2013. Pulse wouldn't be a reality without your contributions!
As another Pulse begins to wind down, it’s time to start thinking about IBM Edge2013 in June. The Edge conference will bring us back to Las Vegas to hear more clients describe how they are Optimizing Storage and IT. If you weren’t able to join the 8000 of us at Pulse2013, start making plans to attend Edge by finding out everything you need to know (including the early-bird discount available through the end of April) at the IBM Edge2013 Conference website.
In many organizations today, storage replication is riddled with manual errors and/or poorly written in-house scripts that often provide no view of overall copy environment status. Additionally, setup and ongoing management of large-scale copy services is increasingly cumbersome. Tivoli Storage Productivity Center (TPC) enables simplified yet comprehensive control over the replication process. With the release of TPC v5.1 in June 2012, replication management capabilities are now well integrated into the TPC core license.
TPC extends support for FlashCopy, Metro Mirror, Global Mirror, and Metro Global Mirror sessions. While providing a central view of the replication environment, TPC provides end-to-end management and tracking of copy services, including both planned and unplanned disaster recovery procedures. In addition, TPC enables practice-volume sessions that allow storage managers to test their DR environment without interfering with daily DR operations.
The following new capabilities were added to TPC v5.1:
Failover operations that are managed by other applications: Applications such as the IBM Series i Toolkit, VMware Site Recovery Manager, and Veritas Cluster Server manage failover operations for certain session types and storage systems. If an application completes a failover operation for a session, the 'Severe' status is displayed for the session. An error message is also generated for the role pairs for which the failover occurred.
Additional support for space-efficient volumes in remote copy sessions: You can use extent space-efficient volumes as copy set volumes for the following IBM System Storage® DS8000® session types:
• FlashCopy® (System Storage DS8000 6.2 or later)
• Metro Mirror (System Storage DS8000 6.3 or later)
• Global Mirror or Metro Global Mirror (System Storage DS8000 6.3 or later)
Reflash After Recover option for Global Mirror Failover/Failback with Practice sessions: You can use the Reflash After Recover option with System Storage DS8000 version 4.2 or later. Use this option to create a FlashCopy replication between the I2 and J2 volumes after the recovery of a Global Mirror Failover/Failback with Practice session. If you do not use this option, a FlashCopy replication is created only between the I2 and H2 volumes.
No Copy option for Global Mirror with Practice and Metro Global Mirror with Practice sessions: You can use the No Copy option with System Storage DS8000 version 4.2 or later. Use this option if you do not want the hardware to write the background copy until the source track is written to. Data is not copied to the I2 volume until the blocks or tracks of the H2 volume are modified.
Recovery Point Objective Alerts option for Global Mirror sessions: You can use the Recovery Point Objective Alerts option with IBM TotalStorage Enterprise Storage Server® Model 800, System Storage DS8000, and System Storage DS6000™. Use this option to specify the length of time that you want to set for the recovery point objective (RPO) thresholds. The values determine whether a Warning or Severe alert is generated when the RPO threshold is exceeded for a role pair. The RPO represents the length of time, in seconds, of data exposure that is acceptable in the event of a disaster.
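To illustrate the RPO-alert concept in that last capability, here's a minimal Python sketch of how threshold values can classify a role pair's observed RPO. This is an illustration only, not TPC's actual implementation, and the threshold numbers are assumptions for the example:

```python
# Minimal sketch of the RPO-alert idea: compare a role pair's observed
# RPO against configured thresholds and classify the result.

WARNING_RPO_SECONDS = 300   # assumed warning threshold (5 minutes)
SEVERE_RPO_SECONDS = 900    # assumed severe threshold (15 minutes)

def rpo_alert(observed_rpo_seconds: int) -> str:
    """Classify a role pair's observed RPO against the thresholds."""
    if observed_rpo_seconds > SEVERE_RPO_SECONDS:
        return "Severe"
    if observed_rpo_seconds > WARNING_RPO_SECONDS:
        return "Warning"
    return "Normal"

# Example: a Global Mirror role pair falls 10 minutes behind.
print(rpo_alert(600))  # -> Warning
```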
Wow! What an exciting week it has been at Pulse 2013, especially for Tivoli Storage! In addition to the inspiring words of Day 3's keynote speaker, Peyton Manning, I was equally inspired by many of our Tivoli Storage Business Partners like Cobalt Iron, Silverstring, Front-safe and STORServer, who led sessions on exciting topics ranging from how to create a cloud service in a TSM environment to how to transform your data backup costs into business opportunities.
Let's start with the final day of Pulse General Sessions, which kicked off to a packed auditorium. Jamie Thomas, IBM Tivoli VP of Strategy and Development, took the stage first with a panel of IBM experts including CTOs Dave Lindquist, Jerry Cuomo, Sandy Bird, and Sky Matthews. These key technology leaders, with Jamie facilitating the discussion, led us through their technology roadmaps around what's new and what's coming in Cloud, Security Intelligence, the Mobile Enterprise and Smarter Physical Infrastructure. Following this panel discussion, Bruce Ross, General Manager for IBM GTS, helped explain how his team is helping to enable the acceleration of cloud services. It was a great line-up of experts, and many shared examples of how our technologies are driving innovation. You can watch the replay of the General Session here.
Next up on the main stage was Pulse 2013 guest speaker Peyton Manning of the Denver Broncos. Peyton gave a heartfelt speech on the "art and science of decision-making." Are you making the right decisions to deliver innovation? Are you sticking to your decisions? These were some of the key topics covered. Offering his perspective on leadership, I think my favorite all-time quote was this: "You can either be a warrior or a worrier." So true. Decisions backed by facts and data analysis are the best decisions, and technology has greatly impacted this process, he pointed out. Scott Hebner, WW Tivoli VP of Marketing, joined him onstage for some great Q&A that ended with Scott going long, "up and out," to catch a bullet pass from Peyton. And, yes, the pass was caught.
Now, on to our Tivoli Storage sessions today, which featured many of our Business Partners. Thomas Bak, CSO & Partner of Denmark-based Front-safe, kicked off the morning with a very interactive discussion on how to create a cloud service around your Tivoli Storage Manager environment. Front-safe, a recent 2013 Beacon Award winner, is bringing TSM into new markets via a cloud-enabled portal. He mentioned 3,000 end customers are already using this solution for backup, backing up literally 10,000+ servers! Front-safe's new cloud backup service provider, i-Sanity, also addressed their "backup as a service" model; they are the first Front-safe backup service provider in South Africa. You can learn more by watching this great video.
Silverstring Ltd, another Tivoli Storage Business Partner, led a session with customer Rabobank International, a large global financial institution with many dispersed TSM systems, who told us all about the best practices they have used for daily TSM administration. Great content was also shared on how these best practices and cloud-based automation software can be combined to actually lower the cost of delivering TSM services and improve service levels.
Later in the day, Richard Spurlock of Cobalt Iron held an engaging session on how to transform your backup costs into business opportunities. Cobalt Iron combines TSM backup with a cloud experience in a simple deployment model that's all about flexible deployment options. Richard really helped the audience better grasp how the costs and complexity of enterprise backup can really "bankrupt" its value – and how Cobalt Iron's solutions can leverage your backup investments into a flexible, high-value data protection solution. Earlier in the week, Cobalt Iron had been honored as a finalist for the 2013 Tivoli Business Partner Awards. Congratulations!
In one of the final Pulse 2013 storage sessions of the day, Business Partner STORServer delivered a compelling presentation on how to provide Backup-as-a-Service with their STORServer Backup Appliance and TSM. This session was of great help both for large enterprises looking for how best to charge for backup services, and for MSPs looking for additional revenue streams.
We also heard from customers like Nyherji, who told the audience all about how they use FlashCopy Manager with TSM Node Replication to increase service levels and obtain high availability. I was especially interested to hear about all the stellar benefits that these Tivoli storage solutions brought to their business – from absolutely zero downtime to hugely improved backup and recovery times. Petur Eythorsson of Nyherji told a great story of how they completely redesigned their TSM environment and added TSM Node Replication and FlashCopy Manager to complete the solution.
I wrapped up my week at Pulse 2013 visiting with both old and new friends and colleagues later that evening, continuing to recap what was my 3rd, and, I believe, BEST Pulse ever! Congratulations and thank you to the IBM Tivoli Pulse team for a job well done. I can't imagine how the next Pulse will trump this one, but, in true Tivoli fashion, I am sure it will.
And, in case you are having Pulse 2013 withdrawals already, we've captured some engaging storage videos this week that are available to you now. I hope you can take a moment to relive some of the great Tivoli Storage moments of the week and listen to all the great things that analysts, thought leaders, our customers, and Business Partners are saying about Tivoli Storage solutions:
And, in case you would like to hear more about what's new and cool coming from Tivoli Storage, you can always join us again in Vegas this June for IBM Edge2013, which will bring you more opportunities to connect with your colleagues and learn about industry best practices for storage management, virtualization, and cloud technologies.
Posted on behalf of Martine Wedlake, Ph.D., Storage UI Architect, IBM Software Group
From talking with customers, we know that it's really important that you find what you need quickly and easily. The original navigation structure for IBM Tivoli Storage Productivity Center (TPC) was built around a resource explorer model -- very much like a Windows file explorer. This, unfortunately, means you can have a ton of entries in the navigation that you'll need to hunt through to find anything.
For example, I took a look at one of our TPC deployments in the lab and started counting the number of clickable entries in the navigation -- I stopped counting once I got to 1000. Based on how far I had gotten, I'd say there were about 1500 or so. I should point out that this is not a particularly large deployment -- 25 storage systems, 7 servers, 4 hypervisors, and 5 fabrics with 46 switches. You can expect a much larger set of entries on larger deployments.
So, we knew pretty early on that we needed to improve the navigation. To do that we switched from a resource explorer view to a by-category view. This allowed us to dramatically simplify the navigation to only 13 high-level categories and no more than two levels deep. No more hunting and pecking to find what you want!
We also made it possible to directly link to the things you want without having to go through the navigation at all. For example, from an SVC storage system's detail page you can link directly to the set of backend controllers in your environment consuming the storage. You don't need to go back out to the navigation menu and then try to track them down all over again. Here's a picture:
The overall concept is that whenever you see something interesting, you should be able to drill down into it. In addition to the navigation of the product, we've spent considerable effort making the content of the user interface easier and more intuitive, and making it consistent with the work we had done previously on the Storwize V7000 and SAN Volume Controller user interfaces – if you've seen one of our GUIs, you'll be able to get up to speed quickly on any of the others.
To that end, we borrowed significantly from the Storwize V7000 GUI -- for example: configurable tables, the visual theme, the embedded help system, charting and general icons. Here's a screenshot of the Storwize V7000 GUI to help show the similarities:
Beyond these cosmetic enhancements, we spent a lot of time working with our stakeholders to deliver the content in an intuitive and simplified way. Knowing what to put on the pages and how to simplify the pages involved a dramatic shift in our development process. But, before I move on to that, I really need to highlight the improvements made with reporting.
In this release, we've embraced Tivoli Common Reporting, which includes IBM Cognos. This is a huge step forward for improving your ability to view and create reports for TPC.
To start with, you will not need to know SQL or the database schema to create reports -- the drag-and-drop interface allows you to simply incorporate the data columns you wish to show, and Cognos already understands the relationships between the entities. For example, let's say you want to show the volumes connected to a given server. In Cognos, you simply add columns for the Server Name and the Storage Volume to the canvas. The tool already understands the relationships between these entities and will automatically join the data appropriately to show which volumes are mapped to which servers.
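To appreciate what that saves you, here's a rough sketch of the kind of join a report author would otherwise write by hand, using Python's built-in sqlite3 and an invented two-table schema. The table and column names here are mine for illustration only -- this is not TPC's actual database layout:

```python
# Hypothetical illustration of the server-to-volume relationship that
# Cognos resolves automatically; the schema is invented for the example.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE servers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE volumes (id INTEGER PRIMARY KEY, name TEXT,
                          server_id INTEGER REFERENCES servers(id));
    INSERT INTO servers VALUES (1, 'appserver01'), (2, 'dbserver01');
    INSERT INTO volumes VALUES (10, 'vol_app_data', 1),
                               (11, 'vol_db_logs', 2),
                               (12, 'vol_db_data', 2);
""")

# The join a report author would otherwise write by hand -- in Cognos
# you just drag Server Name and Storage Volume onto the canvas.
rows = db.execute("""
    SELECT s.name AS server_name, v.name AS volume_name
    FROM servers s JOIN volumes v ON v.server_id = s.id
    ORDER BY s.name
""").fetchall()

for server, volume in rows:
    print(f"{server}: {volume}")
```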
Of course, we also provide upwards of 45 reports out of the box for those who don't wish to create reports themselves. Another neat feature is that the reports included with TPC can be copied and edited with the built-in editor tool within Cognos, so you can take one of our reports and modify it to your liking. Here are some of the reports that are included with TPC:
Working with our customers is one of the most rewarding aspects of my work here at IBM. For this release of TPC, we employed a radically different development model from what we have used in the past. We, like many in the industry, used to develop with a waterfall methodology, where requirements are captured and approved at the beginning of the project, leading to a phase of high-level and low-level designs, and eventually to development, test and delivery to the customer. For this release of TPC, we wanted to include customer input throughout the development cycle -- not just at the beginning when collecting requirements.
As such, we hosted 17 sessions with 34 customers, 7 business partners and 4 internal customers, spread throughout the development cycle (held monthly). We also sent developers and GUI designers out into the field to talk directly with 7 customers. From these combined sessions, we captured 261 distinct requirements and were able to contain 188 of them within the first phase of development, with 47 deferred to the second phase. That means that 72% of the requirements are already implemented in the first phase alone, and 90% of the requirements are expected to be implemented by the second phase. This is very impressive compared with traditional waterfall development.
The best part of an iterative, agile approach is that we are constantly evaluating the effectiveness of the solutions. We learn right away if something isn't quite hitting the mark, and have plenty of time to make changes to improve it.
As a quick plug, it is not too late to participate in our Early Adopter Program for the next phase of TPC. Please feel free to contact me directly (firstname.lastname@example.org) if you would like to participate. We would love to work with you to make TPC even better.
I have been working in storage and storage management my entire career (which has been more years than I want to admit) and I was recently advised by a wise co-worker to start writing about it. Although blogging has been around for quite some time and has certainly increased in popularity in recent years, this is the first time I have braved this form of communication. I stared at a blank blinking cursor for inspiration and decided to write about one of my favorite storage products, the Tivoli Storage Productivity Center.
Several weeks ago IBM announced the new 4.2 release of Tivoli Storage Productivity Center. This release includes some interesting enhancements that I am excited to see in the product. One feature that has received a lot of buzz is the lightweight storage resource agents. TPC started down the path of lighter agents when it introduced a slimmer, but not completely lightweight, version of the agents by moving from Java to C for enhanced performance. Those agents were limited to Windows, AIX, and Linux. The new 4.2 release adds HP-UX and Solaris support, as well as support for file- and database-level management. The new release is backward compatible, meaning that customers who want to continue using agents they set up previously can do so. New customers are no longer required to use the Common Agent Manager.
TPC 4.2 has introduced full support for XIV devices. TPC 4.1 did have toleration support for XIV (basic discovery and capacity information), but with the new release you can provision, get performance information, and use the data path explorer for your XIV machine.
If you have TPC deployed on a System Storage Productivity Center (SSPC), you can upgrade at any time. Customers buying a new SSPC machine after September 3, 2010 will automatically have TPC 4.2 pre-installed on the machine.
I could say a lot more about the new TPC 4.2 release, but instead I am going to point you to a wonderful blog entry that my colleague, Tony Pearson, wrote when the new release was announced. He provides some great insights about the new features in TPC 4.2.
Wow - I made it to the end of my first blog post, and I am beginning to understand why blogging has become so popular. I am starting to wonder why it took me so long to write my first post.
IBM Edge2013 is fast approaching, and while the conference includes four events within an event to appeal to a wide range of attendees, the cornerstone of Edge from my perspective is the rich technical content to be delivered within Technical Edge.
Technical Edge is a 4.5-day technical event for IT professionals and practitioners focused on sharpening expertise, discovering new innovations and learning industry best practices. You can check out the published agenda of the over 350 sessions spanning 16 tracks at Technical Edge that are sure to hit on the top IT trends, opportunities and challenges we collectively face.
Specifically related to Cloud & Smarter Infrastructure, we've embedded over 50 technical sessions, demos and hands-on labs specifically focused on Tivoli solutions, with the majority going deep on Tivoli Storage capabilities. Further, there are an additional 30 related sessions of interest to Tivoli users (i.e., IBM Storwize V7000, IBM Flex System Manager, IBM GPFS, etc.). These 80 sessions are scattered across the 16 tracks within the Technical Edge conference. (Hint: You can find the majority of these sessions within the Business Continuity and Systems Management tracks.)
Some of the session highlights I’m looking forward to seeing are:
“IBM's New Tivoli Storage Manager Operations Center” – Our new TSM GUI!
“IBM Tivoli Storage Manager and the Cloud” – This session will describe TSM’s multifaceted cloud strategy
“Protection of Virtual Machines using Tivoli Storage Manager for Virtual Environments and Tivoli Storage FlashCopy Manager”
“Tivoli Storage Manager for Virtual Environments - Data Protection for VMware: Solution Design”
“Introduction to IBM's Virtual Storage Center (VSC)” – Learn how you can gain storage efficiencies and grow your business using VSC’s capabilities.
“How IBM SVC, Storwize V7000 and TotalStorage Productivity Center are used in real life to migrate data centers”
Additionally, for those who want to roll up their sleeves and get their hands on some of these solutions, I would recommend the following hands-on labs:
IBM’s New Tivoli Storage Manager Operations Center hands-on lab
IBM Tivoli Storage FlashCopy Manager: New Features and Operation in Version 3.2
A double session - IBM Tivoli Storage Manager for Virtual Environments: Protecting and Recovering Virtual Machine Data
We will also have an interesting “Birds of a Feather (BOF)” session on Business Continuity led by Sanjay A. Patel – focused on using the Tivoli Storage Manager suite to help you proactively protect your data.
I encourage you to join us at Technical Edge to enhance your knowledge of our Tivoli solutions; I look forward to “getting technical” in Vegas. Learn more and register today.
Data centers across enterprises are witnessing unprecedented data growth, which translates into increased costs and management complexity. One of the leading analyst groups, the Evaluator Group, has analyzed the storage resource management (SRM) software space and created a detailed insight report outlining how the SRM segment is evolving, and why storage managers need a storage strategy aided by a comprehensive tool such as IBM Tivoli Storage Productivity Center to better manage future challenges.
What's more? In addition to simplified management of capacity, performance and provisioning of storage infrastructure, Tivoli Storage Productivity Center V5.1 also enables comprehensive storage replication management.
In many enterprises today, storage replication is riddled with manual errors and/or poorly written in-house scripts that often provide no view of overall copy environment status. Additionally, setup and ongoing management of large-scale copy services is increasingly becoming cumbersome.
To learn more about TPC's advanced replication management capabilities, tune into the upcoming webinar "Simplified storage replication for high data availability" through Tivoli User Community on Nov 13, 2012 at 11AM ET. Click here to register for this event.
Mike Griese, TPC Product Manager, presented Tivoli Storage Productivity Center v5.1 to a huge gathering at IBM Edge on the opening day. The video is now available on YouTube. To view more videos from IBM Edge, visit: http://www.youtube.com/user/IBMEDGE2012
The Storwize Rapid Application Storage solution, launched in February 2011, brings together innovative storage technology, comprehensive management software and implementation services that help you manage business applications and storage growth efficiently. Continuing from my earlier post, 'Tivoli Storage Productivity Center supports Storwize V7000', my colleague Ian Wright has created a brilliant video showcasing the performance monitoring, alerting and reporting capabilities of Tivoli Storage Productivity Center as part of the Storwize Rapid Application Storage solution.
Specifically, Ian walks us through creating performance reports at the subsystem level to understand the source of a data surge; creating batch reports that enable a better understanding of storage capacities, including tiering information; and enabling thresholds that generate alerts when they are breached.
For more information on Storwize Rapid Application Storage, click here. To learn more about Tivoli Storage Productivity Center for Disk Midrange Edition, click here.
Please reach out to IBM Sales Specialists or an IBM Business Partner to understand how the Storwize Rapid Application Storage solution can benefit your organization's efforts toward efficiently managing the data explosion.
Note: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.