NEW!! Technical Services Webinar: Capacity Planning in a Tivoli Storage Manager Environment
As much as customers would like to "back up everything and keep it forever", storage is not unlimited. The reality of ever-increasing data growth, combined with regulatory compliance and the associated risks, makes the arduous task of capacity planning for backup ever more critical. A new Reporting and Monitoring tool is available with Tivoli Storage Manager (TSM). This new tool, based on IBM Tivoli Monitoring, can collect and report on historical data and is an integral part of a capacity planning regimen.
This session will demonstrate a capacity planning methodology that conforms to the ITIL Capacity Planning process description by showing how the TSM Reporting and Monitoring tool and other TSM components can be used to ease the pain of capacity planning. Additionally, this session will look at strategies, like data deduplication, to reduce the amount of backup data while maintaining regulatory compliance.
Presenters:
Mark Vanderboll, IBM Tivoli Global Response Team
Dave Daun, IBM Advanced Technical Skills
Access the webinar here: http://bit.ly/qdOuJU
Did you see that we announced a new version of Tivoli Storage FlashCopy Manager?
Here are the highlights of IBM® Tivoli® Storage FlashCopy® Manager V3.1:
- Advanced data protection and recovery features for VMware vSphere environments
- Enhanced data protection capabilities for Microsoft® Windows®, including support for the New Technology File System (NTFS) and custom applications, and enhanced user interfaces for Microsoft Exchange and Microsoft SQL Server
- Support for IBM DB2® and Oracle databases (with or without SAP environments) on IBM AIX®, Solaris SPARC, Linux® x64, and HP-UX IA64 platforms
- Support for custom business-critical applications on IBM AIX, Solaris SPARC, Linux x64, HP-UX IA64, and Microsoft Windows platforms
- Transparent integration with IBM storage systems such as IBM System Storage® SAN Volume Controller space efficient FlashCopy target volumes, IBM Storwize® V7000, IBM XIV® Storage System, and IBM System Storage DS8000™
- Can leverage the Microsoft Volume Shadow Copy Services (VSS) framework for integration with non-IBM hardware subsystems
- Database cloning support for UNIX® and Linux clients
It will be generally available on October 21, 2011.
Check out the full announcement here:
Check out the NEW Tivoli Storage Manager v6.3
You should expect more from your storage, and from your storage vendor. On October 11 and 12, IBM is announcing a broad range of new and enhanced storage products that help to meet this challenge.
Included are significant updates to the Tivoli Storage Manager (TSM) family. TSM is already the data protection software leader in scalability, functionality, data reduction, performance and reliability. The v6.3 release will keep us ahead of the competition, and will help to keep you ahead of the challenges you’re facing. Struggling with data growth? No problem.
The scalability of TSM is being doubled for the 3rd year in a row, now supporting up to 4 billion data objects in a single TSM Server. In 2008, the internal database limit was 500 million files, so we’ve made an 8X improvement since then. That means fewer backup servers are needed. And remember that TSM is a single-server architecture; we don’t add “media servers” to provide scale.
Unified Recovery Management now includes Replication for faster Disaster Recovery
We’ve been working to simplify the data protection and recovery infrastructure by unifying the management of all the different tools you need for different applications, operating systems, data locations, and data loss causes. In the release of Tivoli Storage Manager Extended Edition v6.3, we’re adding client data and metadata replication to an off-site TSM Server. This provides a “hot standby” disaster recovery capability, managed from within the TSM Admin Center. The replication is asynchronous and can be scheduled on a per-client basis to minimize the impact on network bandwidth. And it can be configured in a classic source-to-target model as well as between two active sources, many-to-one, or in a “round robin” architecture.
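To make the per-client scheduling concrete, here is a minimal sketch of how an administrator might enable and schedule nightly replication for a single client node by driving the TSM administrative CLI (dsmadmc) from a Python script. The server name, node name, credentials, and schedule values are illustrative assumptions, not a prescribed procedure:

```python
# Hypothetical sketch: enabling per-client node replication on a TSM 6.3
# source server by issuing administrative commands through dsmadmc.
# DRSERVER1, CLIENT01, and the credentials are invented for illustration.
import subprocess

def tsm_admin(command: str) -> str:
    """Run one administrative command via dsmadmc in batch mode."""
    result = subprocess.run(
        ["dsmadmc", "-id=admin", "-password=secret", command],
        capture_output=True, text=True, check=True)
    return result.stdout

# Point this source server at its off-site replication target,
# then opt one node in and schedule its replication off-peak.
tsm_admin("SET REPLSERVER DRSERVER1")                # assumed target server
tsm_admin("UPDATE NODE CLIENT01 REPLSTATE=ENABLED")  # enable one node
tsm_admin('DEFINE SCHEDULE REPL_CLIENT01 TYPE=ADMINISTRATIVE '
          'CMD="REPLICATE NODE CLIENT01" ACTIVE=YES '
          'STARTTIME=23:00 PERIOD=1 PERUNITS=DAYS')  # nightly at 11 pm
```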
Simplifying the administrator’s life
One of the painful tasks an administrator has to do, especially in large environments, is patching the backup/archive client software on protected systems. With this release, we’re adding the ability to automatically push out client software updates across AIX, HP-UX, Linux and Windows systems (Windows push support was actually introduced last year). This new capability should reduce the time needed to perform an update by at least 80%.
Improved integration with VMware
Tivoli Storage Manager for Virtual Environments v6.3, our non-disruptive off-host solution for VMware virtualized servers, now supports VMware vSphere 5 and includes a plug-in for vCenter to easily manage TSM backup and restore operations from within the VMware environment. Tivoli Storage FlashCopy Manager v3.1 is also being released with VMware vStorage APIs for Data Protection integration and the vCenter plug-in to provide hardware-assisted, application-aware snapshot management. Support for DB2, Oracle and SAP databases on HP-UX is also added in the new FlashCopy Manager release.
Something BIG for mainframe customers
Tivoli Storage Manager for z/OS Media v6.3 is a new connector that enables customers to leverage their mainframe-attached FICON storage devices for storing TSM data. This offering won’t increase the licensing costs for existing customers that move their TSM v5.x Server software from z/OS and install the TSM v6.3 Server on an AIX system or a Linux on z partition, and it gives them access to all of the cost-saving improvements made in TSM over the past 3 years.
The new standard in licensing simplicity
In June we announced the availability of the Tivoli Storage Manager Suite for Unified Recovery. This bundle of ten TSM and FastBack products is simply licensed by the amount of data being managed within the TSM environment, first copy only. We have seen outstanding results from this new offering, from both new and existing customers. The reason is simple: you want to use the right tool for each data protection job, but you don’t want to have to worry about acquiring and managing individual product licenses for each one. This is especially true in virtual server and cloud environments. Added benefit: our broad range of built-in data reduction technologies can dramatically reduce the costs of this offering.
Tivoli Storage Manager Suite for Unified Recovery v6.3 is also being announced, and includes all of the TSM and TSM for Virtual Environments enhancements noted above.
What else?
Many other improvements are being introduced across the family, including better reporting and monitoring, better scalability for Microsoft Windows, Exchange and SQL Server, and faster internal processes such as database backup. SAP customers using TSM for Enterprise Resource Planning v6.3 can now do incremental backups with Oracle RMAN.
For more information on the Tivoli Storage Manager enhancements, please refer to the announcement letter on ibm.com.
To learn more about all the new IBM Storage announcements, please click here (live 12 Oct.).
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
In continuation of my earlier post – Eliminate management inefficiencies and complexities associated with your cloud foray – I would like to call out some of the advanced capabilities of Tivoli Storage Productivity Center.
Ron’s recent post on choosing the right storage hypervisor points to ‘comprehensive performance monitoring’ as one of the key capabilities you need to successfully deploy cloud storage. This reinforces the need for sophisticated tools that can significantly reduce the burden of storage configuration (think best practices) and performance monitoring.
Bottleneck analysis
When system response is poor, it’s no longer the network administrator who gets the call – it’s the storage administrator. Especially in a virtualized environment, it is essential to have performance monitoring tools that provide a quick yet comprehensive view of the data path to ascertain any bottlenecks. With Tivoli Storage Productivity Center, you can see where bottlenecks have occurred; for example, one storage subsystem may be overutilized while another is underutilized.
Data Path Explorer offers a detailed view of all the storage entities and their connectivity. It provides performance information across the entire data path – from host to array – and allows you to drill down and view performance metrics at the port level. Standard Edition, the advanced module within Tivoli Storage Productivity Center, offers advanced reporting capabilities for bottleneck analysis.
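As a rough illustration of what bottleneck spotting looks like once performance data has been collected, here is a small Python sketch (not TPC code; the subsystem names, numbers, and thresholds are invented) that flags over- and under-utilized subsystems:

```python
# Minimal sketch of imbalance detection across storage subsystems,
# using invented utilization samples and assumed thresholds.
utilization = {          # average busy % per storage subsystem
    "DS8000_A": 92.0,
    "SVC_POOL_1": 34.0,
    "V7000_B": 58.0,
}

HIGH, LOW = 80.0, 40.0   # assumed alerting thresholds

for subsystem, busy in sorted(utilization.items(), key=lambda kv: -kv[1]):
    if busy >= HIGH:
        print(f"{subsystem}: {busy:.0f}% busy -- potential bottleneck")
    elif busy <= LOW:
        print(f"{subsystem}: {busy:.0f}% busy -- underutilized, migration target")
```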
According to a storage manager at a leading medical university, “With Tivoli Storage Productivity Center, I can quickly determine if there is a bottleneck in the SAN infrastructure. Earlier it could take me days or sometimes weeks to figure that out. Now, I can do it in minutes.”
If you have recently deployed Tivoli Storage Productivity Center, make use of IBM’s Value Pack service offerings, which provide analysis of disk subsystem performance bottlenecks using native product tools. Talk to your IBM sales representative or IBM business partner for more information.
Configure your SAN the best practices way
Administrators are expected to ensure high availability for SANs. SAN configuration has traditionally been done manually. But as the complexity in managing the storage network grows, you need sophisticated tools to control and even optimize storage configurations. And adherence to best practices is essential for successful configuration and deployment of complex systems in your storage environment.
I touched upon the SAN Planner briefly in my previous post and would like to delve a little deeper in this one. As mentioned earlier, SAN Planner is a wizard-based tool that assists storage administrators in end-to-end planning involving all storage components and related networks. SAN Planner helps implement best practices pertaining to replication relationships; it utilizes current and historical performance metrics to recommend the best configuration when commissioning new storage systems.
There are three planners for recommending storage configuration changes, based on current workloads, capacity utilization and best practices:
Volume Planner – helps administrators provision storage based on capacity, compression, RAID levels, etc. It also includes replication planning, supporting sessions such as Metro Mirror, Global Mirror and FlashCopy.
Zone Planner – provides zoning and LUN masking configuration support.
Path Planner – assists in planning and implementing storage provisioning for hosts and storage systems with multipath support in fabrics.
All three planners can be invoked separately or together, in an integrated manner, from the Tivoli Storage Productivity Center console. Learn more about these planners and their capabilities: download the latest Redbooks.
As you can see, configuring a SAN with Tivoli Storage Productivity Center is child’s play, isn’t it? But can you check whether your current SAN configuration conforms to best practices? Yes, you can!
SAN Configuration Analyzer provides an end-to-end check of configuration policies, ensuring the correctness of storage network configurations such as zoning, multipathing and replication. In addition, the tool alerts administrators when best practices are violated.
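Here is a toy Python sketch of the kind of policy check described above. The data model and field names are my own inventions for illustration; the product’s actual checks are far richer:

```python
# Illustrative best-practice check (not the SAN Configuration Analyzer's
# data model): every host should have at least two independent fabric
# paths and be zoned to its assigned storage ports.
from dataclasses import dataclass

@dataclass
class HostAttachment:
    host: str
    paths: int        # independent paths through the fabric
    zoned: bool       # zoned to its assigned storage ports?

attachments = [
    HostAttachment("dbhost01", paths=2, zoned=True),
    HostAttachment("apphost07", paths=1, zoned=True),   # violates multipath policy
]

for a in attachments:
    if a.paths < 2:
        print(f"ALERT {a.host}: single-path attachment violates multipath best practice")
    if not a.zoned:
        print(f"ALERT {a.host}: not zoned to its assigned storage ports")
```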
Storage networks undergo significant changes ever more often to accommodate shifting business policies and ever-growing data. Administrators are challenged to track configuration changes for problem determination, change management or auditing purposes. Tivoli Storage Productivity Center offers the SAN Configuration History Viewer to provide a historical view of changes and eliminate the time gap in identifying problem areas associated with configuration changes.
To learn more about the IBM Tivoli Storage Productivity Center Select Series, contact your IBM sales representative or IBM Business Partner, or visit ibm.com to join the virtual dialogue on Storage Hypervisor; share your thoughts and concerns in our group chat on October 7, 2011 from 12 noon to 1pm Eastern Time. You can log in now for a preview of topics.
In response to: Enabling Private IT for Storage Cloud -- Part II (management controls)
To see a transcript of the live chat held on Friday, September 30th about this topic, visit this link:
And don't forget to listen to the 'open mic' conversation about Storage Hypervisors with IBM's Ron Riffe, the author of this blog series, and ESG analyst Mark Peters:
This is part 3 of a 3-part post on how somebody responsible for a private storage environment could save their company a pile of money by implementing cloud storage techniques. Part I introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. Part II explained how a storage service catalog, self-service provisioning, and usage-based chargeback can be used to drive down cost. In this 3rd post, I’m going to share some thoughts that should help you be smarter about choosing a storage hypervisor.
The first step is to remind ourselves what we’re trying to accomplish with a storage hypervisor. From our experience deploying over 7000 storage hypervisors, the starting point for most folks is improved efficiency and data mobility. Remember, the basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, people are looking for the freedom to move a slice (or virtual volume) from tier to tier, from vendor to vendor, and more recently, from site to site all while the applications are online and accessing the data.
To pull off this level of mobility – in servers or storage – it’s important that the hypervisor not be dependent on the underlying physical hardware for anything except capacity (compute capacity in the case of a server hypervisor like VMware, storage capacity in the case of a storage hypervisor). Think about it… Wouldn’t it be odd to have a pair of VMware ESX hosts in a cluster, one running on IBM hardware and one on HP hardware, and be told that you couldn’t vMotion a virtual machine between the two because some feature of your virtual machine would just stop working? If you tie a virtual machine to a specific piece of hardware in order to take advantage of the function in that hardware, it sort of defeats the whole point of mobility. The same thing applies to storage hypervisors. Virtual volumes that are dependent on a particular physical disk array for some function, say mirroring or snapshotting for example, aren’t really mobile from tier to tier or vendor to vendor any more.
But it’s more than just a philosophical issue; there’s real money at stake (you may want to read what comes next a couple of times). In Part II of this post I discussed using a storage service catalog as a means of defining specific service-level needs for your different categories of data. These service levels covered the gamut from capacity efficiency and I/O performance (for you techies, that’s RAID levels, thin provisioning, use of solid state disks, etc), to data access resilience and disaster protection (multi-pathing, snapshotting, mirroring…). The reason so many datacenters have an overabundance of tier-1 disk arrays on the floor is because, historically, if you wanted to take advantage of things like thin provisioning, application-integrated snapshots, robust mirroring for disaster recovery, high performance for database workloads, access to solid-state disk, etc., you had to buy tier-1 ‘array capacity’ to get access to these tier-1 ‘storage services’ (did you catch the subtle difference?). Now, I don’t have anything against tier-1 disk arrays (my company sells a really good one). In fact, they have a great reputation for availability (a lot of the bulk in these units is sophisticated, redundant electronics that keeps the thing available all the time). But with a good storage hypervisor, tier-1 ‘storage services’ are no longer tied to tier-1 ‘array capacity’ because the service levels are provided by the hypervisor. Capacity… is capacity… and you can choose any kind you want. Many clients we work with are discovering the huge cost savings that can be realized by continuing to deliver tier-1 service (from the hypervisor), only doing it on lower-tier disk arrays. As I noted in Part II of this post, we’ve seen clients shift their mix of ‘array capacity’ from 70% tier-1 to 70% lower-tier arrays while continuing to deliver tier-1 ‘storage services’ to their data. This YouTube video describes an example of that at Sprint.
Smart idea #1: Be careful to understand what, if any, dependency a storage hypervisor has on the capability of an underlying disk array to deliver function to your virtual volumes (like thin provisioning, compression, snapshotting, mirroring, etc.).
Next thought. There are three rather interrelated solution categories in the area of dealing with outages and protecting data.
- Disaster avoidance (“I know the hurricane is coming, let’s move the datacenter further inland”)
- Disaster recovery (“uh oh, the hurricane hit, and my datacenter is dead”)
- Data protection (“oops, I goofed up my data and I need to recover”)
IT managers we talk to have been successfully dealing with disaster recovery (for the techies, that’s array mirroring along with recovery automation tools like VMware Site Recovery Manager (SRM), IBM PowerHA, or others) and data protection (that’s array snapshotting along with specific connectors for databases, email systems, etc., as well as connectors to enterprise backup managers like Tivoli Storage Manager) for years. The third area, disaster avoidance, has really gained steam because storage hypervisors now allow you to access the same data at two locations, giving you the ability to do an inter-site application migration with things like VMware vMotion, PowerVM Live Partition Mobility (LPM), or others. When you are expecting a disaster, disaster avoidance lets you transparently get out of the way. But it doesn’t magically keep all the other unexpected bad things from happening. You’ll still want to be prepared with disaster recovery and data protection. And if you are implementing a storage hypervisor, you shouldn’t be forced to choose.
Smart idea #2: Remembering smart idea #1, be sure that your storage hypervisor has its own ability to provide for disaster avoidance (inter-site mobility), disaster recovery (mirroring that’s integrated with recovery automation tools) and data protection (snapshotting that’s integrated with applications and backup tools).
One final thought. A storage hypervisor isn’t an island unto itself. Like a server hypervisor, it exists in a broader datacenter. Administrators need to be able to see it in the context of the disk arrays it manages, the servers (or virtual machines) that use its virtual volumes, the applications that need backups or clones, the disaster recovery automation that’s coordinating recovery of servers, storage, networks… You get the picture. When the challenges of day-to-day operations happen (and they do happen most every day)…
- …the storage network planner needs to look at the logical data path from a virtual machine to its physical server, through the SAN switch, to the storage hypervisor and finally to the physical disk array. He’ll need that storage hypervisor to be integrated with a SAN topology tool.
- …an application owner calls up with a performance issue that he’s blaming on ‘the storage’. You’ll need to be able to isolate performance across the whole data path (including the part of the path that goes through the storage hypervisor).
- …an application owner wants a consistent snapshot of his application to use as a backup copy (or a test clone). You’ll need a connector that talks to both the application and the storage hypervisor to identify the virtual volumes that need to be snapshotted, prepare the application for the snapshot, and then provide the application owner with an inventory of snapshots he can use to recover from.
- …you make the move toward cloud techniques in your private datacenter – implementing a storage service catalog, self-service provisioning, and usage-based chargeback. You’ll need a storage hypervisor that can be auto provisioned and that can provide the metrics on who is using how much storage.
Smart idea #3: Make a list of all the day-to-day operational things you do today with physical storage, and the things you hope to automate in the future, and be careful to understand if your storage hypervisor is sufficiently instrumented and integrated – or if it’s creating a new island to be separately managed.
And now a word from our sponsors :-) IBM offers the world’s most widely deployed storage hypervisor. With over 7000 deployments, hundreds in the newer inter-site disaster avoidance configuration, we’ve had a lot of opportunity to build some depth. As you evaluate using cloud storage techniques in your private enterprise, you’ll find the things I talked about in this blog series available in IBM products today. They can help you save your company a pile of money (and make you look like a genius while you’re doing it).
Storage hypervisor platform: IBM System Storage SAN Volume Controller (SVC)
Storage hypervisor management, storage service catalog, and self-service provisioning: Tivoli Storage Productivity Center Standard Edition (TPC SE)
Usage-based chargeback: Tivoli Usage and Accounting Manager
Thanks for staying with me through this blog series – hope you find it valuable!
The conversation continues!
While storage on the cloud is a promising thought, the ensuing complexity associated with monitoring, managing and reporting on the storage sprawl acts as a significant deterrent. Organizations need to equip their environment with a robust storage resource management application that can withstand business demands, offer comprehensive visibility across the data path, and scale as their storage network expands. Fellow IBMer Ron Riffe, in his recent blog on the Storage Hypervisor, wrote about the importance of management controls and highlighted some of the key capabilities of IBM’s storage resource management offering, Tivoli Storage Productivity Center.
Tivoli Storage Productivity Center supports both IBM and other-vendor storage devices that are compliant with SMI-S standards, and offers integrated storage infrastructure management capabilities that simplify, automate, and optimize the management of storage devices, storage networks, and the capacity utilization of file systems and databases.
I am going to highlight four key capabilities administrators will require should they pursue putting their storage on the cloud…
Self-service provisioning
Tivoli Storage Productivity Center enables automated discovery and wizard-based provisioning of storage systems, enabling administrators to effectively provision new storage through best-practice methods, often including disaster recovery planning while provisioning.
Tivoli Storage Productivity Center offers the SAN Planner, which assists the administrator in end-to-end planning involving fabrics, hosts, storage controllers, storage pools, volumes, paths, ports, zones, zone sets, storage resource groups (SRGs), and replication. The SAN Planner provides recommendations for creating and provisioning VDisks, with the recommended I/O group and preferred node for each VDisk.
Combining Tivoli Storage Productivity Center with Tivoli Provisioning Manager gives storage administrators a powerful way to simplify the provisioning of storage. Automated workflows can be created that utilize custom scripts and customer processes, including storage administrator and/or systems administrator sign-off. The native integration between Tivoli Storage Productivity Center and Tivoli Provisioning Manager lets storage administrators allow application owners to procure and provision the storage space they need, the type of storage they need, the throughput that is expected, and the price specifications to suit their business priorities.
To read more about SAN Planner, click here
Storage tiering reports (to be announced on Oct 14, 2011)
Storage tiering reports were developed by IBM Systems Lab Services under the larger theme known as STAR – Storage Tiering Activity Reporter – which provides decision support for data placement and enables administrators to optimize resource utilization in terms of capacity and throughput.
Storage tiering reports help administrators leverage storage virtualization and insights from Tivoli Storage Productivity Center to (see the sketch after this list):
• Identify data that could be moved to an alternate tier of storage or a less active Managed Disk Group
• Identify the hottest and coolest Managed Disk Groups and Virtual Disks based on performance, to assist in up-tiering, down-tiering and re-tiering decisions
• Make “proactive” volume placement decisions
• Understand the performance stress on the hardware in comparison to its capability
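The sketch below (plain Python, not STAR itself) illustrates the tiering decision the reports support: rank volumes by a simple heat metric such as I/O density and recommend up- or down-tiering. All names, numbers, and thresholds are invented:

```python
# Toy tiering recommendation: I/O density (IOPS per GB) as the heat metric.
vdisks = [  # (name, average IOPS, allocated GB, current tier)
    ("vd_sap_log", 4200, 200, "tier2"),
    ("vd_archive01", 15, 2000, "tier1"),
]

for name, iops, gb, tier in vdisks:
    density = iops / gb                    # simple heat metric
    if density > 5 and tier != "tier1":
        print(f"{name}: hot ({density:.1f} IOPS/GB) -- candidate to up-tier")
    elif density < 0.1 and tier == "tier1":
        print(f"{name}: cool ({density:.2f} IOPS/GB) -- candidate to down-tier")
```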
A large European telecommunications giant saw a 45% decline in storage acquisition cost in its first deployment, translating into a 55% discounted-cash-flow saving over a 4-year TCO evaluation.
Ensuring storage service levels
Continuous performance monitoring and reporting is key to business continuity and maintaining service levels in a cloud environment. Tivoli Storage Productivity Center provides end-to-end visibility to administrators from a single management console, including detailed performance metrics, data path and system connectivity, impact analysis and historical trending.
Administrators can ensure storage devices, storage networks and attached devices are performing in an optimized state by setting different threshold levels for different storage entities based on the criticality of the asset. Alerts are generated when these thresholds are exceeded, duly notifying administrators of potential impact and downtime. Tivoli Storage Productivity Center also offers policy-based event actions based on performance values and business policies.
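As a small illustration of criticality-based thresholds, here is a Python sketch (thresholds and entity names are assumptions, not product defaults) showing how the same utilization reading can trip an alert on a critical asset while staying within tolerance on a standard one:

```python
# Sketch of criticality-based alerting: the alert point depends on how
# critical the asset is, so the same reading is judged differently.
THRESHOLDS = {"critical": 70.0, "standard": 85.0}   # assumed busy-% limits

def check(entity: str, criticality: str, busy_pct: float) -> None:
    limit = THRESHOLDS[criticality]
    if busy_pct > limit:
        print(f"ALERT {entity}: {busy_pct}% busy exceeds {limit}% ({criticality} asset)")

check("SVC_iogrp0_port3", "critical", 78.2)   # alerts early on a critical asset
check("tape_fabric_sw2", "standard", 78.2)    # same reading, within tolerance
```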
Tivoli Storage Productivity Center derives storage utilization insights by running raw performance data against proven models to predict the utilization of components within the subsystem. This feature provides a unique “heat map” style of display that makes it easier for administrators to zero in on storage “hot spots”, and thus more easily identify risks and discover both under- and over-utilized areas of the storage infrastructure.
To learn more about Storage Optimizer, click here
In a cloud environment, administrators are challenged to create and manage shared storage services that can be charged back to users based on consumption. When usage varies between departments or businesses, storage administrators require chargeback capabilities in order to simplify departmental allocation and manage capacity utilization.
Through integration with Tivoli Usage and Accounting Manager, Tivoli Storage Productivity Center for Data enables storage administrators to understand storage usage and perform cost allocation or chargeback to users of storage. These solutions help support storage administrators in their efforts to track, allocate and bill different departments and lines of business based on multiple usage metrics. As a result, the organization can better align the storage infrastructure with overall business objectives and be better prepared to meet future requirements.
Tivoli Storage Productivity Center for Data supports multi-tenancy requirements, allowing cloud storage administrators to manage and track storage usage for multiple clients simultaneously. Advanced multi-customer support capabilities enable organizations to charge in different currencies and to charge different rates for the same service, as well as providing cloud consumers with price breakdowns for resources used and resources reserved for future use. The solution supports large data centers as well as public and hybrid cloud environments.
Click here to download the white paper ‘Optimizing capacity and management of file systems and databases’.
The Storage Hypervisor discussion is gaining momentum. Join the conversation! The virtual dialogue on this topic will continue in a live group chat on September 30, 2011 from 12 noon to 1pm Eastern Time. You can log in now for a preview of topics.
This is part 2 of a 3-part post on how somebody responsible for a private storage environment could save their company a pile of money by implementing cloud storage techniques. Part I introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. In this 2nd post, I’m going to explain some of the key storage cloud management controls that can be used to drive down cost.
Storage services are standardized. When it comes to shopping, I avoid (at almost all costs) actually going to the store. You can keep all the time and frustration of traffic, fighting for a parking place, wandering aimlessly through aisles of choices, and standing in checkout lines. I’ll take the simplicity and speed of a good online catalog any day.
The idea of shopping from a catalog isn’t new, and the cost efficiency it offers to the supplier isn’t new either. Public storage cloud service providers seized on the catalog idea quickly, both as a means of providing a clear description of available services to their clients and as a means of controlling costs. Here’s the idea… I can go to a public cloud storage provider like Amazon S3, Nirvanix, Google Storage for Developers, or any of a host of other providers, give them my credit card, and get some storage capacity. Now, the “kind” of storage capacity I get depends on the service level I choose from their catalog. These folks each offer a few different service-level options. Amazon S3, for example, offers Standard Storage or Reduced Redundancy Storage (can you guess which one costs less?).
Most of today’s private IT environments represent the complete other end of the pendulum swing – total customization. Every application owner, every business unit, every department wants to have complete flexibility to customize their storage services in any way they want. This expectation is one of the reasons so many private IT environments have such a heavy mix of tier-1 storage. Since there is no structure around the kind of requests that are coming in, the only way to be prepared is to have a disk array that could service anything that shows up. Not very efficient… There has to be a middle ground.
Enter the private storage cloud with its storage service catalog. In the consultative service engagements we’ve done, we have found that most private enterprises have something like fifteen-ish distinct data types (things like database, e-mail, video, shared files, home directories, etc). A simple storage service catalog would describe the specific service levels needed by each of these data types. Let’s take “Database” and build out the scenario.
The first thing you’ll need is a place to create your catalog of storage services. IBM Tivoli Storage Productivity Center Standard Edition is a good option (man, what a mouthful – let’s just call it TPC SE for short… hmm, I’ll probably get fired for that :-) You’re going to use the wizard to create a new “Database” catalog entry.
Now, for each catalog entry, there are a variety of service levels that can be defined that cover things like capacity efficiency, I/O performance, data access resilience, and disaster protection. By this point you’re probably rolling your eyes because you know your application owners… and they’re going to want every byte of their data to have the highest available service in each of these areas (and you wonder why you have so much tier-1 storage). A little bit further into this post we’re going to talk about the wonder of usage-based chargeback, but we’re getting ahead of ourselves. For now, let’s assume you’re having a coherent conversation with your application owners and are able to define realistic needs for your database data. Maybe something like this…
From there, you’re back to the wizard. Actually defining the attributes of the catalog entry is a little mundane (lots of propeller-head knobs and dials to turn), but once you’re done – you’re done! – and life gets really efficient. So, let’s get the mundane stuff out of the way. First are the capacity efficiency and I/O performance attributes (be sure to notice that for “Database” we are telling the catalog we want virtual volumes – from a storage hypervisor. There will be a test in a paragraph or so :-)
Then the data access resilience attributes.
And finally the disaster protection attributes.
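Pulling those three attribute screens together, a “Database” catalog entry conceptually captures something like the following. This is a plain Python sketch with attribute names of my own choosing, not TPC SE’s actual wizard fields:

```python
# Conceptual sketch of a "Database" storage service catalog entry.
# All keys and values are illustrative, not the product's data model.
database_service = {
    "provisioning": "virtual volume from storage hypervisor",  # the test!
    "thin_provisioned": True,            # capacity efficiency
    "raid_level": "RAID-10",             # I/O performance
    "solid_state_eligible": True,        # hot blocks may move to SSD
    "multipathing": 2,                   # data access resilience
    "snapshots_per_day": 4,              # data protection
    "mirrored_to_dr_site": True,         # disaster protection
}
```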
I told you it was a little mundane. But now come the exciting results that really drive cost out of the environment and save a huge amount of administrative time.
First is capital expense. You’re running mostly tier-1 disk arrays today. You have just finished defining the fifteen-ish catalog entries your company is going to use. Some, like “Database”, call for storage services that are often associated with tier-1 disk arrays. Most others don’t. With a little intelligent forecasting, you should be able to determine exactly how much tier-1 storage capability you really need, and how much lower-tier storage you can start using. We’ve seen clients shift their mix from 70% tier-1 to 70% lower-tier storage (a pretty significant capital expense shift). And if the thought of moving all that existing data from tier-1 to a lower tier makes you shudder, refer back to Part I of this post and look again at the data mobility provided by a good storage hypervisor (Test: did you notice that for “Database” we told the catalog we wanted virtual volumes – from a storage hypervisor…).
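To see why that mix shift matters, here is a back-of-envelope Python sketch. Only the 70/30-to-30/70 ratio comes from the experience described above; the per-GB prices are invented purely for illustration:

```python
# Blended capital cost per GB before and after the tier-mix shift.
TIER1_COST, LOWER_COST = 8.0, 3.0      # assumed $/GB, for illustration only

def blended(tier1_share: float) -> float:
    return tier1_share * TIER1_COST + (1 - tier1_share) * LOWER_COST

before, after = blended(0.70), blended(0.30)
print(f"${before:.2f}/GB -> ${after:.2f}/GB "
      f"({(1 - after / before):.0%} lower capital cost per GB)")
# With these assumed prices: $6.50/GB -> $4.50/GB, about 31% lower.
```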
The second big savings is in operational expense (keep reading).
Storage provisioning is self-service.
Most public storage services are targeted at end users like you and me who bring our credit card and provision some storage. Private storage clouds are a little different. Administrators we talk to aren’t generally ready to let all their application owners and departments have the freedom to provision new storage on their own without any control. In most cases, new capacity requests still need to stop off at the IT administration group. But once the request gets there, life for the IT administrator is sweet!
Here comes the request from an application owner for 500GB of new “Database” capacity (one of the options available in the storage service catalog) to be attached to some server. After appropriate approvals, the administrator can simply enter the three important pieces of information (type of storage = “Database”, quantity = 500GB, name of the system authorized to access the storage) and click the “Go” button (in TPC SE it’s actually a “Run now” button) to automatically provision and attach the storage. No more complicated checklists or time-consuming manual procedures.
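Conceptually, the whole request boils down to three parameters. Here is a hypothetical Python sketch of that interface (the function and its internals are mine, for illustration; in TPC SE this is a wizard ending in a “Run now” button):

```python
# Hypothetical sketch of the three-field, self-service provisioning request.
def provision(service_class: str, size_gb: int, host: str) -> None:
    """Auto-provision capacity from the catalog and attach it to a host."""
    print(f"Provisioning {size_gb} GB of '{service_class}' for {host}...")
    # Behind the scenes: pick a pool meeting the service class, create the
    # virtual volume, zone/mask it to the host, record it for chargeback.

provision(service_class="Database", size_gb=500, host="dbserver42")
```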
Storage is paid per use.
It’s the little-appreciated but incredibly powerful tool in the quest to drive cost out of the environment. When end users are aware of the impact of their consumption and service-level choices, they tend to make more efficient choices. Conversely (we all know what happens here), when there’s no correlation between service-level choices and end-user visibility to cost… well… you have a lot of tier-1 storage on the floor.
A chargeback tool like IBM Tivoli Usage and Accounting Manager (TUAM) completes the story we have been building (a small sketch of the billing arithmetic follows this list)…
- You negotiate a set of storage service levels (like “Database”) with your application owners and business units.
- You create the storage service catalog entry for “Database”.
- Your end users request some new “Database” capacity be assigned to a particular server.
- You push the “Run now” button and the capacity is auto-provisioned.
- Your end user receives an invoice (complete with individual line items for each class of service in which they are consuming capacity).
- You’re in the cloud now!
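For illustration, here is a tiny Python sketch of the per-service line items a chargeback tool like TUAM could produce. The rates and usage figures are invented:

```python
# Sketch of per-service-class chargeback line items; numbers are invented.
RATES = {"Database": 0.90, "Shared Files": 0.25}     # assumed $/GB/month
usage = {"Database": 500, "Shared Files": 1200}      # GB consumed per class

total = 0.0
for service, gb in usage.items():
    line = gb * RATES[service]
    total += line
    print(f"{service:<14} {gb:>6} GB  ${line:>8.2f}")
print(f"{'Total':<14} {'':>6}     ${total:>8.2f}")
```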
Stay tuned for Part III of this post, where I’ll explore some technical thoughts you’ll want to consider when picking a storage hypervisor. The conversation is building!
Earlier this week, fellow IBM blogger Tony Pearson joined the conversation with a perspective on Storage Hypervisor integration with VMware. And IBM blogger Rich Vining added a perspective on VMware Data Protection with a Storage Hypervisor. To cap it off, we just completed our first live group chat with over 30 IT managers, industry analysts, independent bloggers, and IBM storage experts. Join the conversation!
The virtual dialogue on this topic will continue in another live group chat on September 30, 2011 from 12 noon to 1pm Eastern Time.
I recently read an excellent post by Ron Riffe, a fellow IBMer, discussing practical recommendations for introducing cloud techniques into a private storage environment – the end goal being to save your company a substantial amount of money while becoming more responsive to the needs of the business. The first of the four steps discussed in the post was to introduce a storage hypervisor – virtualization of your storage infrastructure. It’s a good idea, especially if you have already virtualized some or all of your production server environment with something like VMware.
But there’s more to it than just the efficiency and mobility you get from virtualizing. The customers we talk to are finding new value that rises out of the synergy when both the server and storage environments are virtualized. One example is in the area of data protection. In this post, I’m going to explain the 1+1=3 effect for data protection that comes from combining VMware with a good storage hypervisor.
Let’s start with a quick walk down memory lane. Do you remember what your data protection environment looked like before virtualization? There was a server with an operating system and an application… and that thing had a backup agent on it to capture backup copies and send them someplace (most likely over an IP network) for safekeeping. It worked, but it took a lot of time to deploy and maintain all the agents, a lot of bandwidth to transmit the data, and a lot of disk or tapes to store it all. The topic of data protection has modernized quite a bit since then.
Today, you’re using a server hypervisor (VMware) to efficiently pack several virtual machines onto one physical server – and to make it so you can deploy, move and decommission those VMs pretty much at will. If you are still using the old techniques for data protection on that physical server (deploying an agent on each individual VM, and then transferring all the backup data for those VMs through the one IP network pipe), you’re probably running into significant performance and application availability problems, and also missing out on some significant savings (if you listen carefully, you can hear your backup environment screaming “modernize me, MODERNIZE ME!”).
Fast forward to today. Modernization has come from three different sources – the server hypervisor, the storage hypervisor and the unified recovery manager. The end result is a data protection environment that captures all the data it needs in one coordinated snapshot action, efficiently stores those snapshots, and provides for recovery of just about any slice of data you could want. It’s quite the beautiful thing.
Data capture: VMware has provided a nice set of APIs that allow disk arrays and backup vendors to intelligently drive snapshots of a VMware datastore (for the techies, these are the vStorage APIs for Data Protection, or VADP). The problem is that integration from a disk array to these APIs is a tier-1 kind of service found on very few disk arrays today. That’s where a good storage hypervisor comes in. A storage hypervisor will include its own integration between VMware VADP and hardware-assisted snapshots, and it will plug its control GUI directly into the VMware vCenter management console. That means, regardless of what type of disk array capacity you have chosen to use for your VMware data, the storage hypervisor will be able to do a hardware-assisted snapshot of the VMware datastore (all your VMs at once – sweet!).
Here’s a scenario we see…
- Administrators want to snapshot the VMware datastore 4 times a day. 4 days’ worth are maintained – 16 total snapshots “online”
- For longer-term recovery, they promote one snapshot each day to a unified recovery manager. 1 month of these are maintained – 31 total snapshots “nearline”
The snapshots can add up, so efficiency is important. For the “online” snapshots, a good storage hypervisor stores only incremental changes, compresses the result and stores it as a thin provisioned volume on lower-tier disk capacity (the new 3TB SAS drives make a nice choice). Notice in this scenario, the administrator is also promoting one of the snapshots each day (say, the midnight snapshot) to an enterprise recovery manager. If you are using IBM’s Tivoli Storage Manager Suite for Unified Recovery, then it will insert deduplication in the list of efficiency techniques being applied to the snapshot (incremental snapshots that are deduplicated, compressed, and stored on lower-tier disk… that’s about as efficient as it gets).
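A quick Python sketch of the retention arithmetic in this scenario (the schedule values come straight from the bullets above):

```python
# Retention arithmetic for the snapshot scenario described above.
PER_DAY, ONLINE_DAYS, NEARLINE_DAYS = 4, 4, 31

online = PER_DAY * ONLINE_DAYS          # 16 hardware-assisted snapshots online
nearline = NEARLINE_DAYS                # 31 promoted daily copies nearline
print(online, nearline)                 # -> 16 31

# Worst-case recovery point within the online window: the gap between snapshots.
print(f"max recovery point age: {24 // PER_DAY} hours")   # -> 6 hours
```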
Flexible recovery: Whether the snapshot is online or nearline, the only reason you have it is so that you can recover when something (anything) goes wrong. A good hypervisor / unified recovery manager combination will give VMware administrators the ability to peer inside the snapshot and recover individual files, virtual volumes, or entire VMs. Using the scenario above, your recovery point would be no more than 6 hours old for the last 4 days, and your recovery time would be measured in minutes.
IBM offers one of the world’s best-known unified recovery managers and the world’s most widely deployed storage hypervisor. With over 7000 storage hypervisor deployments, we’ve had a lot of opportunity to build some depth. Deep integration with VMware for modernizing your data protection environment is one example. If you are running VMware and haven’t yet modernized data protection, IBM can help. You can learn more at the following links.
Storage hypervisor platform: IBM System Storage SAN Volume Controller (SVC)
Storage hypervisor management, storage service catalog, and self-service provisioning: Tivoli Storage Productivity Center Standard Edition
Join the conversation! The virtual dialogue on this topic will continue in a live group chat on September 23, 2011 from 12 noon to 1pm Eastern Time. Join some of the Top 20 storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
Simplify Data Protection and Reduce Costs With Unified Recovery Management
On September 22, we will be hosting an educational webcast that will address the challenges of providing data protection and recovery for rapidly growing amounts of diverse enterprise data. During this call, you will hear about our unified recovery management solution that can help reduce complexity, risk and costs. Included in this solution is a new simple, value-based option for procuring and managing software licenses.
Speaker: Rich Vining, Product Marketing Manager
Date: September 22, 2011
Time: 11:00 AM Eastern US
Please register for this event using this link.
After registering you will receive a confirmation note with call-in instructions.
To borrow a phrase from a fellow blogger… Customer interest in cloud storage is very, very hot, and that’s been keeping us very, very busy. The interest underscores the fact that public storage cloud providers have sent a “cost shockwave” through the industry, and customers are taking notice.
While CIOs may still be too concerned about security and service levels to put much real corporate information in the public cloud, they have noticed that these service providers are offering storage capacity at prices that are often lower than what they are paying for their own private storage. Sure, a service provider theoretically has more economy of scale and so could demand a better price from their hardware vendors, but they also have some profit margin to build into their “service”. There has to be more to it. The customers I talk to are wondering what these service providers are doing to operate at those costs – and whether any of their techniques can be applied in a private storage environment.
The situation raises the question: what is it that differentiates these public storage clouds from the traditional private storage environments that most clients operate? From our experience with customers, there are four significant differences.
- Storage resources are virtualized from multiple arrays, vendors, and datacenters – pooled together and accessed anywhere.
(as opposed to physical array-boundary limitations)
- Storage services are standardized – selected from a storage service catalog.
(as opposed to customized configuration)
- Storage provisioning is self-service – administrators use automation to allocate capacity from the catalog.
(as opposed to manual component-level provisioning)
- Storage usage is paid per use – end users are aware of the impact of their consumption and service level choices.
(as opposed to paid from a central IT budget)
In this post, I’m going to try to explain these four concepts in sufficient detail that somebody responsible for a private storage environment could walk away with some practical recommendations that could save their company a pile of money. Most of this isn’t really original (the concepts have been around for a while), but so few enterprises operate this way that the person who introduces their company to these ideas often looks like a genius (and who doesn’t like that!!). It’s a long topic, so I’ve broken it into 3 posts.
In Part I of this post:
I’ll explain the value of virtualizing storage resources. Hint: you’ve likely already done it to your server resources with some sort of server hypervisor like VMware vSphere, IBM PowerVM, or Microsoft Hyper-V… so now let’s look at what you get from doing it to your storage resources with a storage hypervisor.
In Part II of this post:
I’m going to explain how public storage clouds use management controls like service catalogs, self-service provisioning, and pay-per-use to drive down their costs. I’ll also try to offer some practical ideas for using these techniques in a private enterprise setting to gain similar efficiencies.
In Part III of this post:
I’m going to explore some technical thoughts you’ll want to consider when picking a storage hypervisor.
Ready to jump in?
Storage resources are virtualized. Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them. Server hypervisors and the virtual machines they manage have improved efficiency (no more wasted compute resources), freed up mobility, and ushered in a whole new “cloud” language.
Well, the same ideas apply to storage. As administrators catch on to these ideas, it won’t be long before we’ll be asking the question “Do you remember back when virtual machines used disks that really were physical arrays (all that “physical” stuff that kept everything in one place and slowed all your processes down)?”
In August, Gartner published a paper that observed, “Heterogeneous storage virtualization devices can consolidate a diverse storage infrastructure around a common access, management and provisioning point, and offer a bridge from traditional storage infrastructures to a private cloud storage environment” (there’s that “cloud” language). So, if I’m going to use a storage hypervisor as a first step toward cloud-enabling my private storage environment, what differences should I expect? (Good question; we get that one all the time!)
Perhaps the most obvious expectations are improved efficiency and data mobility. The basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, you can move a slice (or virtual volume) from tier to tier, from vendor to vendor, and now, from site to site – all while the applications are online and accessing the data. This opens up all kinds of use cases that have been described as “cloud”. One of the coolest is inter-site application migration. Just recently, a hurricane hit the eastern coast of the United States. If your datacenter had been in the projected path of that hurricane, and if you had implemented both a server hypervisor (let’s say VMware vSphere for your Intel servers and IBM PowerVM for your Power systems) and a storage hypervisor platform (let’s say IBM SVC), then here’s what you might have said: “Hey, the hurricane is coming, let’s move operations to another datacenter further inland…” IBM SVC Stretched Cluster allows you to access the same data at both locations, giving you the ability to do an inter-site VMware vMotion and PowerVM Live Partition Mobility (LPM) move – non-disruptively. As far as the end users are concerned, their applications are running in a private cloud. For you… you avoided a disaster and got to sleep well that weekend.
But storage hypervisors are more, much more than just virtual slices and data mobility. Remember, we’re trying to think like a service provider who is driving cost out of the equation. Sure, we’re getting high utilization from allocating virtual slices, but are we being as smart as we could be about allocating those slices? A good storage hypervisor helps you be smart.
- Thin provisioning: You have a client that asks for 500GB of new capacity. You’re going to give it to him as thin-provisioned virtual capacity, which is a fancy way of saying you’re not going to actually back it with real physical storage until he writes real data on it. That helps you keep cost down. (A tiny model of this idea appears after this list.)
- Compression: Same guy also asks to keep several snapshot copies of his data for recovery purposes. You’re going to start by giving him thin provisioned capacity for those snapshots, but you’re also going to compress whatever data those snapshots produce – again adding to your efficiency.
- Agnostic about vendors: Because you’re providing virtual storage resources from a storage service catalog (we’ll talk more about that in Part II of this post), you have the freedom to shift the physical storage you operate from all tier-1 to a more efficient mix of lower tiers, and while you’re doing it you can create a little competition among as many disk array vendors as you like to get the best price / support.
- Smart about tiers: If you shut your eyes real tight and think about the concept of a “virtual” disk that is mobile across arrays and tiers, you’ll quickly start asking questions about having the storage hypervisor watch for I/O patterns on blocks within that virtual disk that would benefit from higher tier capacity, like solid-state (SSD) or flash disk for example. A good storage hypervisor will automate the detection of such patterns and move hot data blocks to these highest tiers of storage if you have them.
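And here is the promised toy model of thin provisioning, in Python. This is a conceptual sketch of the idea, not SVC’s implementation:

```python
# Toy model of a thin-provisioned volume: 500 GB is promised to the client,
# but physical blocks are consumed only as data is actually written.
class ThinVolume:
    def __init__(self, virtual_gb: int):
        self.virtual_gb = virtual_gb    # what the client sees
        self.real_gb = 0                # what actually backs it

    def write(self, gb: int) -> None:
        # Allocate physical capacity lazily, never beyond the promise.
        self.real_gb = min(self.virtual_gb, self.real_gb + gb)

vol = ThinVolume(500)
vol.write(40)
print(vol.virtual_gb, vol.real_gb)      # -> 500 40: 460 GB not yet purchased
```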
Are you getting the picture of why so many enterprises are beginning to agree with Gartner that a storage hypervisor can be a great first step in transitioning traditional IT into a private cloud storage environment? Application owners come to you for storage capacity because you’re responsible for the storage at your company. In the old days, if they requested 500GB of capacity, you allocated 500GB off of some tier-1 physical array – and there it sat. But then you discovered storage hypervisors! Now you tell that application owner he has 500GB of capacity… What he really has is a 500GB virtual volume that is thin provisioned, compressed, and backed by lower-tier disks. When he has a few data blocks that get really hot, the storage hypervisor dynamically moves just those blocks to higher-tier storage like SSDs. His virtual disk can be accessed anywhere across vendors, tiers and even datacenters. And in the background you have changed the vendor storage he is actually sitting on twice, because you found a better supplier. But he doesn’t know any of this, because he only sees the 500GB virtual volume you gave him. It’s “in the cloud”.
Stay tuned for Part II of this post…
Join the conversation! The virtual dialogue on this topic will continue in a live group chat on September 23, 2011 from 12 noon to 1pm Eastern Time. Join some of the Top 20 storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
The Global Tivoli User Community is hosting an all-day, storage-focused Tivoli User Group event in New York City on October 11th that will cover:
New Approaches and Best Practices for Data Lifecycle Management
Date: Tuesday, October 11, 2011 - 8:30 am – 4:30 pm
Location: IBM Office, 590 Madison Avenue, NYC - Briefing Center auditorium, 3rd Floor
At this event you will have the opportunity to learn from experts about and discuss the following:
- Current developments and future directions for Tivoli Storage solutions from Steve Wojtowecz, IBM VP of Tivoli Development
- Best practices from TSM customer deployments by event sponsors and IBM Business Partners Mainline Information Systems and Starfire Technologies
- Unified Recovery Management and capacity pricing discussion
- Storage Hypervisor solution / storage in the cloud
Continental breakfast and lunch will be served.
We hope you will join us to learn about these key storage topics, as well as hear real-life customer success stories from our IBM Business Partners.
Space is limited and you must be a TUC member to attend. Register today to secure your spot at this special event!
Not a part of the Tivoli User Community yet? Become a member.
The IBM Champion program is still accepting nominations for experts on IBM Tivoli Software. Nominations are open through August 19.
The IBM Champion program recognizes exceptional contributors to the technical community -- clients and partners who work alongside IBM to build solutions for a smarter planet. An IBM Champion is an individual who leads and mentors his or her peers and motivates them toward IBM solutions and services. Champions can be found running user groups, managing websites, speaking at conferences, answering questions in online forums, writing blogs, submitting wiki articles, sharing how-to videos, and writing technical books.
The IBM Champion program recognizes and thanks these innovative thought leaders, amplifying their voice and increasing their sphere of influence in the technical community. IBMers, partners and clients are encouraged to submit nominations through August 13th. To learn more and to submit your nominations, go to: IBM Champion homepage.
IT managers are broadly exploiting virtual server infrastructures -- hypervisors -- to improve efficiency, provide for transparent mobility, and give common manageability and capabilities regardless of the type of server hardware being used. These same robust benefits are now available for virtual storage infrastructures with the IBM storage hypervisor (System Storage SAN Volume Controller and its management console, Tivoli Storage Productivity Center).
Listen to the webcast to understand how the IBM storage hypervisor can be a complementary next step in the overall IT environment virtualization process.
Click and learn more about IBM System Storage SAN Volume Controller and IBM Tivoli Storage Productivity Center. Please reach out to IBM sales or IBM business partners to understand how the IBM storage hypervisor solution can benefit your organization's effort to virtualize and efficiently manage storage.
Is your archived information earning its keep? Are explosive data growth, regulatory compliance, legal discovery, and data protection on your mind? Drivers like these demand long-term, high-volume data retention. Join IBM and IDC on June 8, 2011 at 11AM EST for an informative webinar on how to "Get more from your archived information."
Laura DuBois, IDC Program Vice President, Storage Software, and Craig Butler, IBM Business Line Executive for Storage Archive, Data Protection & Retention, will address a smarter approach to archiving. Find out how companies like yours use the IBM Smart Archive strategy and lead offerings to help ensure that relevant information is properly retained and protected throughout its life cycle. As part of the Smart Archive strategy, IBM offers IBM Information Archive: a streamlined, flexible archiving solution that helps organizations of practically all sizes address their information retention needs - whether business, legal or regulatory.