Did you see that we announced a new version of Tivoli Storage FlashCopy Manager?
Here are the highlights of IBM® Tivoli® Storage FlashCopy® Manager V3.1:
- Advanced data protection and recovery features for VMware vSphere environments
- Enhanced data protection capabilities for Microsoft® Windows®, including support for the New Technology File System (NTFS) and custom applications, and enhanced user interfaces for Microsoft Exchange and Microsoft SQL Server
- Support for IBM DB2® and Oracle databases (with or without SAP environments) on IBM AIX®, Solaris SPARC, Linux® x64, and HP-UX IA64 platforms
- Support for custom business-critical applications on IBM AIX, Solaris SPARC, Linux x64, HP-UX IA64, and Microsoft Windows platforms
- Transparent integration with IBM storage systems such as IBM System Storage® SAN Volume Controller space-efficient FlashCopy target volumes, IBM Storwize® V7000, IBM XIV® Storage System, and IBM System Storage DS8000™
- Can leverage the Microsoft Volume Shadow Copy Service (VSS) framework for integration with non-IBM hardware subsystems
- Database cloning support for UNIX® and Linux clients
It will be generally available on October 21, 2011.
Check out the full announcement here:
Check out the NEW Tivoli Storage Manager v6.3
You should expect more from your storage, and from your storage vendor. On October 11 and 12, IBM is announcing a broad range of new and enhanced storage products that help to meet this challenge.
Included are significant updates to the Tivoli Storage Manager (TSM) family. TSM is already the data protection software leader in scalability, functionality, data reduction, performance and reliability. The v6.3 release will keep us ahead of the competition, and will help to keep you ahead of the challenges you’re facing. Struggling with data growth? No problem.
The scalability of TSM is being doubled for the 3rd year in a row, now supporting up to 4 billion data objects in a single TSM Server. In 2008, the internal database limit was 500 million files, so we’ve made an 8X improvement since then. That means fewer backup servers are needed. And remember that TSM is a single server architecture; we don’t add “media servers” to provide scale.
Unified Recovery Management now includes Replication for faster Disaster Recovery
We’ve been working to simplify the data protection and recovery infrastructure by unifying the management of all the different tools you need for different applications, operating systems, data locations, and data loss causes. In the release of Tivoli Storage Manager Extended Edition v6.3, we’re adding client data and metadata replication to an off-site TSM Server. This provides a “hot standby” disaster recovery capability, managed from within the TSM Admin Center. The replication is asynchronous and can be scheduled on a per-client basis to minimize the impact on network bandwidth. And it can be configured in a classic source-to-target model as well as between two active sources, many-to-one, or in a “round robin” architecture.
Simplifying the administrator’s life
One of the painful tasks an administrator has to do, especially in large environments, is patching the backup/archive client software on protected systems. With this release, we’re adding the ability to automatically push out client software updates across AIX, HP-UX, Linux and Windows systems (Windows push support was actually introduced last year). This new capability should reduce the time needed to perform an update by at least 80%.
Improved integration with VMware
Tivoli Storage Manager for Virtual Environments v6.3, our non-disruptive off-host solution for VMware virtualized servers, now supports VMware vSphere 5 and includes a plug-in for vCenter to easily manage TSM backup and restore operations from within the VMware environment. Tivoli Storage FlashCopy Manager v3.1 is also being released with VMware vStorage APIs for Data Protection integration and the vCenter plug-in to provide hardware-assisted application-aware snapshot management. Support for DB2, Oracle and SAP databases on HP-UX is also added in the new FlashCopy Manager release.
Something BIG for mainframe customers
Tivoli Storage Manager for z/OS Media v6.3 is a new connector that enables customers to leverage their mainframe-attached FICON storage devices for storing TSM data. This offering won’t increase the licensing costs for existing customers that move their TSM v5.x Server software from z/OS and install TSM v6.3 Server on an AIX system or a Linux on z partition, and gives them access to all of the cost-saving improvements made in TSM over the past 3 years.
The new standard in licensing simplicity
In June we announced the availability of the Tivoli Storage Manager Suite for Unified Recovery. This bundle of ten TSM and FastBack products is simply licensed by the amount of data being managed within the TSM environment, first copy only. We have seen outstanding results from this new offering, from both new and existing customers. The reason is simple: you want to use the right tool for each data protection job, but you don’t want to have to worry about acquiring and managing individual product licenses for each one. This is especially true in virtual server and cloud environments. Added benefit: our broad range of built-in data reduction technologies can dramatically reduce the costs of this offering.
Tivoli Storage Manager Suite for Unified Recovery v6.3 is also being announced, and includes all of the TSM and TSM for Virtual Environments enhancements noted above.
What else?
Many other improvements are being introduced across the family, including better reporting and monitoring; better scalability for Microsoft Windows, Exchange and SQL Server; and faster internal processes such as database backup. SAP customers using TSM for Enterprise Resource Planning v6.3 can now do incremental backups with Oracle RMAN.
For more information on the Tivoli Storage Manager enhancements, please refer to the announcement letter on ibm.com.
To learn more about all the new IBM Storage announcements, please click here
(live 12 Oct.)
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
In continuation of my earlier post – Eliminate management inefficiencies and complexities associated with your cloud foray – I would like to call out some of the advanced capabilities of Tivoli Storage Productivity Center.
Ron’s recent post on choosing the right storage hypervisor points to ‘comprehensive performance monitoring’ as one of the key capabilities you need to successfully deploy cloud storage. This reinforces the need for sophisticated tools that can significantly reduce the burden of storage configuration (think best practices) and performance monitoring.
Bottleneck analysis
When system response is poor, it’s no longer the network administrator who gets the call – it’s the storage administrator. Especially in a virtualized environment, it is essential to have performance monitoring tools that provide a quick yet comprehensive view of the data path to identify any bottlenecks. With Tivoli Storage Productivity Center, you can see where bottlenecks have occurred – for example, one storage subsystem may be overutilized while another is underutilized.
Data Path Explorer offers a detailed view of all the storage entities and their connectivity. It provides performance information across the entire data path – from host to array – and lets you drill down to view performance metrics at the port level. Standard Edition, the advanced module within Tivoli Storage Productivity Center, offers advanced reporting capabilities for bottleneck analysis.
According to a storage manager at a leading medical university, “With Tivoli Storage Productivity Center, I can quickly determine if there exists a bottleneck in the SAN infrastructure. Earlier it could take me days or sometimes weeks to figure that out. Now, I can do it in minutes”.
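The over/under-utilization comparison described above can be sketched in a few lines of Python. This is illustrative only – the subsystem names, metric, and thresholds are assumptions, not product output:

```python
# Illustrative only: flag over- and under-utilized storage subsystems from
# utilization percentages, the kind of comparison Tivoli Storage Productivity
# Center surfaces graphically. The 80/20 thresholds are assumptions.

def find_imbalance(utilization, high=80.0, low=20.0):
    """Split subsystems into overutilized and underutilized groups."""
    over = [name for name, pct in utilization.items() if pct >= high]
    under = [name for name, pct in utilization.items() if pct <= low]
    return over, under

over, under = find_imbalance({"DS8000-1": 92.5, "V7000-1": 12.0, "XIV-1": 55.0})
# A subsystem at 92% sitting next to one at 12% suggests rebalancing
# volumes across the pool rather than buying more capacity.
```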
If you have recently deployed Tivoli Storage Productivity Center, make use of IBM’s Value Pack service offerings, which provide analysis of disk subsystem performance bottlenecks using native product tools. Talk to your IBM sales representative or IBM business partner for more information.
Configure your SAN the best practices way
Administrators are expected to ensure high availability for SANs. SAN configuration has traditionally been done manually. But as the complexity in managing the storage network grows, you need sophisticated tools to control and even optimize storage configurations. And adherence to best practices is essential for successful configuration and deployment of complex systems in your storage environment.
I touched upon the SAN Planner topic briefly in my previous post – and would like to delve little deeper in this one. As mentioned earlier, SAN Planner is a wizard-based tool that assists storage administrators in end-to-end planning involving all storage components and related networks. SAN Planner helps implement best practices pertaining to replication relationships; it utilizes current and historical performance metrics to recommend the best configuration while commissioning new storage systems.
There are three planners associated with recommending storage configuration changes, based on current workloads, capacity utilization, and best practices:
Volume Planner – helps administrators provision storage based on capacity, compression, RAID levels, and so on. It includes replication planning as well, supporting sessions such as Metro Mirror, Global Mirror, and FlashCopy.
Zone Planner – provides zoning and LUN masking configuration support.
Path Planner – assists in planning and implementing storage provisioning for hosts and storage systems with multipath support in fabrics.
All three planners can be invoked separately, or together in an integrated manner, from the Tivoli Storage Productivity Center console. Learn more about these planners and their capabilities: download the latest Redbooks
As you can see, configuring a SAN with Tivoli Storage Productivity Center is child’s play. But can you check whether your current SAN configuration conforms to best practices? Yes you can!
SAN Configuration Analyzer provides an end-to-end check of configuration policies and ensures the correctness of storage network configurations such as zoning, multipathing, and replication. In addition, the tool alerts administrators when best practices are violated.
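As a rough illustration of this kind of policy-driven checking (not the product’s actual rules engine – the rule names and host fields here are assumptions), best-practice validation could be modeled like this:

```python
# Hypothetical sketch of best-practice validation in the spirit of
# SAN Configuration Analyzer: check each host against simple rules and
# report violations to alert on. Rules and fields are illustrative.

BEST_PRACTICES = {
    "min_paths": 2,   # multipathing: at least two paths per host
    "zoned": True,    # every host must belong to a zone
}

def check_configuration(hosts):
    """Return a list of best-practice violations found in the host list."""
    violations = []
    for host in hosts:
        if host["paths"] < BEST_PRACTICES["min_paths"]:
            violations.append(f"{host['name']}: only {host['paths']} path(s) configured")
        if BEST_PRACTICES["zoned"] and not host["zoned"]:
            violations.append(f"{host['name']}: not a member of any zone")
    return violations
```

Each violation would then feed the alerting mechanism described above, rather than waiting for an outage to reveal the misconfiguration.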
Storage networks undergo significant change ever more often to accommodate shifting business policies and ever-growing data. Administrators are challenged to track configuration changes for problem determination, change management, or auditing purposes. Tivoli Storage Productivity Center offers the SAN Configuration History Viewer to provide a historical view of changes and eliminate the time gap in pinpointing problem areas associated with configuration changes.
To learn more about the IBM Tivoli Storage Productivity Center Select Series, contact your IBM sales representative or IBM Business Partner, or visit ibm.com
to join the virtual dialogue on Storage Hypervisor; share your thoughts and concerns in our group chat on October 7, 2011 from 12 noon to 1pm Eastern Time. You can log in now for a preview of topics.
In response to: Enabling Private IT for Storage Cloud -- Part II (management controls)
To see a transcript of the live chat held on Friday, September 30th, about this topic, visit this link:
And don't forget to listen to the 'open mic' conversation about Storage Hypervisors with IBM's Ron Riffe, the author of this blog series, and ESG analyst Mark Peters:
This is part 3 of a 3 part post on how somebody responsible for a private storage environment could save their company a pile of money by implementing cloud storage techniques. Part I
introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. Part II
explained how a storage service catalog, self-service provisioning, and usage-based chargeback can be used to drive down cost. In this 3rd post, I’m going to share some thoughts that should help you be smarter about choosing a storage hypervisor.
The first step is to remind ourselves what we’re trying to accomplish with a storage hypervisor. From our experience deploying over 7000 storage hypervisors, the starting point for most folks is improved efficiency and data mobility. Remember, the basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, people are looking for the freedom to move a slice (or virtual volume) from tier to tier, from vendor to vendor, and more recently, from site to site all while the applications are online and accessing the data.
To pull off this level of mobility – in servers or storage – it’s important that the hypervisor not be dependent on the underlying physical hardware for anything except capacity (compute capacity in the case of a server hypervisor like VMware, storage capacity in the case of a storage hypervisor). Think about it… Wouldn’t it be odd to have a pair of VMware ESX hosts in a cluster, one running on IBM hardware and one on HP hardware, and be told that you couldn’t vMotion a virtual machine between the two because some feature of your virtual machine would just stop working? If you tie a virtual machine to a specific piece of hardware in order to take advantage of the function in that hardware, it sort of defeats the whole point of mobility. The same thing applies to storage hypervisors. Virtual volumes that are dependent on a particular physical disk array for some function, say mirroring or snapshotting for example, aren’t really mobile from tier to tier or vendor to vendor any more.
But it’s more than just a philosophical issue; there’s real money at stake (you may want to read what comes next a couple of times). In Part II of this post I discussed using a storage service catalog as a means of defining specific service level needs for your different categories of data. These service levels covered the gamut from capacity efficiency and I/O performance (for you techies that’s RAID levels, thin provisioning, use of solid state disks, etc), to data access resilience and disaster protection (multi-pathing, snapshotting, mirroring…). The reason so many datacenters have an overabundance of tier-1 disk arrays on the floor is because, historically, if you wanted to take advantage of things like thin provisioning, application-integrated snapshots, robust mirroring for disaster recovery, high performance for database workloads, access to solid-state disk, etc… you had to buy tier-1 ‘array capacity’ to get access to these tier-1 ‘storage services’ (did you catch the subtle difference?). Now, I don’t have anything against tier-1 disk arrays (my company sells a really good one). In fact, they have a great reputation for availability (much of the bulk in these units is sophisticated, redundant electronics that keep the thing available all the time). But with a good storage hypervisor, tier-1 ‘storage services’ are no longer tied to tier-1 ‘array capacity’ because the service levels are provided by the hypervisor. Capacity…is capacity…and you can choose any kind you want. Many clients we work with are discovering the huge cost savings that can be realized by continuing to deliver tier-1 service (from the hypervisor), only doing it on lower-tier disk arrays. As I noted in Part II of this post, we’ve seen clients shift their mix of ‘array capacity’ from 70% tier-1 to 70% lower-tier arrays while continuing to deliver tier-1 ‘storage services’ to their data. This YouTube video
describes an example of that at Sprint.
Smart idea #1: Be careful to understand what, if any, dependency a storage hypervisor has on the capability of an underlying disk array to deliver function to your virtual volumes (like thin provisioning, compression, snapshotting, mirroring, etc.)
Next thought. There are three rather interrelated solution categories in the area of dealing with outages and protecting data.
- Disaster avoidance (“I know the hurricane is coming, let’s move the datacenter further inland”)
- Disaster recovery (“oh oh, the hurricane hit, and my datacenter is dead”)
- Data protection (“oops, I goofed up my data and I need to recover”)
IT managers we talk to have been successfully dealing with disaster recovery (for the techies, that’s array mirroring along with recovery automation tools like VMware Site Recovery Manager
(SRM), IBM PowerHA
, or others) and data protection (that’s array snapshotting along with specific connectors for databases, email systems, etc., as well as connectors to enterprise backup managers like Tivoli Storage Manager) for years. This third area of disaster avoidance has really gained steam because storage hypervisors now allow you to access the same data at two locations, giving you the ability to do an inter-site application migration with things like VMware vMotion
, PowerVM Live Partition Mobility
(LPM), or others. When you are expecting a disaster, disaster avoidance lets you transparently get out of the way. But it doesn’t magically keep all the other unexpected bad things from happening. You’ll still want to be prepared with disaster recovery and data protection. And if you are implementing a storage hypervisor, you shouldn’t be forced to choose.
Smart idea #2: Remembering smart idea #1, be sure that your storage hypervisor has its own ability to provide for disaster avoidance (inter-site mobility), disaster recovery (mirroring that’s integrated with recovery automation tools) and data protection (snapshotting that’s integrated with applications and backup tools).
One final thought. A storage hypervisor isn’t an island unto itself. Like a server hypervisor, it exists in a broader datacenter. Administrators need to be able to see it in the context of the disk arrays it manages, the servers (or virtual machines) that use its virtual volumes, the applications that need backups or clones, the disaster recovery automation that’s coordinating recovery of servers, storage, networks… You get the picture. When the challenges of day-to-day operations happen (and they do happen most every day)…
- …the storage network planner needs to look at the logical data path from a virtual machine to its physical server, through the SAN switch, to the storage hypervisor and finally to the physical disk array. He’ll need that storage hypervisor to be integrated with a SAN topology tool.
- …an application owner calls up with a performance issue that he’s blaming on ‘the storage’. You’ll need to be able to isolate performance across the whole data path (including the part of the path that goes through the storage hypervisor).
- …an application owner wants a consistent snapshot of his application to use as a backup copy (or a test clone). You’ll need a connector that talks to both the application and the storage hypervisor to identify the virtual volumes that need to be snapshotted, prepare the application for the snapshot, and then provide the application owner with an inventory of snapshots he can use to recover from.
- …you make the move toward cloud techniques in your private datacenter – implementing a storage service catalog, self-service provisioning, and usage-based chargeback. You’ll need a storage hypervisor that can be auto provisioned and that can provide the metrics on who is using how much storage.
Smart idea #3: Make a list of all the day-to-day operational things you do today with physical storage, and the things you hope to automate in the future, and be careful to understand if your storage hypervisor is sufficiently instrumented and integrated – or if it’s creating a new island to be separately managed.
And now a word from our sponsors :-) IBM offers the world’s most widely deployed storage hypervisor. With over 7000 deployments, hundreds in the newer inter-site disaster avoidance configuration, we’ve had a lot of opportunity to build some depth. As you evaluate using cloud storage techniques in your private enterprise, you’ll find the things I talked about in this blog series available in IBM products today. They can help you save your company a pile of money (and make you look like a genius while you’re doing it).
Storage hypervisor platform: IBM System Storage SAN Volume Controller (SVC)
Storage hypervisor management, storage service catalog, and self-service provisioning: Tivoli Storage Productivity Center Standard Edition (TPC SE)
Usage-based chargeback: Tivoli Usage and Accounting Manager
Thanks for staying with me through this blog series – hope you find it valuable!
The conversation continues!
While storage on the cloud is a promising thought, the ensuing complexity of monitoring, managing, and reporting on storage sprawl acts as a significant deterrent. Organizations need to equip their environments with a robust storage resource management application that can withstand business demands, offer comprehensive visibility across the data path, and scale as their storage network expands. Fellow IBMer Ron Riffe
in his recent blog on Storage Hypervisor
wrote about the importance of management controls
and highlighted some of the key capabilities of IBM’s storage resource management offering – Tivoli Storage Productivity Center.
Tivoli Storage Productivity Center supports both IBM and other-vendor storage devices that are compliant with SMI-S standards, and offers integrated storage infrastructure management capabilities that simplify, automate, and optimize the management of storage devices, storage networks, and the capacity utilization of file systems and databases.
I am going to highlight the 4 key capabilities administrators would require should they pursue putting their storage on the cloud…
Self service provisioning
Tivoli Storage Productivity Center offers automated discovery and wizard-based provisioning of storage systems, enabling administrators to provision new storage effectively through best-practice methods, often including disaster recovery planning at provisioning time.
Tivoli Storage Productivity Center offers the SAN Planner, which assists the administrator in end-to-end planning involving fabrics, hosts, storage controllers, storage pools, volumes, paths, ports, zones, zone sets, storage resource groups (SRGs), and replication. The SAN Planner provides recommendations for creating and provisioning VDisks, with the recommended I/O group and preferred node for each VDisk.
By combining Tivoli Storage Productivity Center with Tivoli Provisioning Manager, storage administrators have a powerful way to simplify the provisioning of storage. Automated workflows can be created that utilize custom scripts and customer processes, including storage administrator and/or systems administrator sign-off. The native integration between Tivoli Storage Productivity Center and Tivoli Provisioning Manager enables storage administrators to allow application owners to procure and provision the storage space they need, the type of storage they need, the throughput that is expected, and the price specifications to suit their business priorities.
To read more about SAN Planner, click here
Storage tiering reports (to be announced on Oct 14, 2011)
Storage tiering reports were developed by IBM Systems Lab Services under the larger theme known as STAR – Storage Tiering Activity Reporter – which provides decision support for data placement and enables administrators to optimize resource utilization in terms of capacity and throughput.
Storage tiering reports help administrators to leverage storage virtualization and insights from Tivoli Storage Productivity Center to
• Identify data that could be moved to an alternate tier of storage or a less active Managed Disk Group
• Identify the hottest and coolest Managed Disk Groups and Virtual Disks based on performance, to assist in up-tiering, down-tiering, and re-tiering decisions
• Make “proactive” volume placement decisions
• Understand the performance stress on the hardware in comparison to its capability
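The hottest/coolest ranking in the second bullet can be sketched as follows. This is a hypothetical illustration only – the Managed Disk Group names and the I/O-rate metric are assumptions, not STAR’s actual report format:

```python
# Hypothetical sketch: rank Managed Disk Groups by average I/O rate and
# nominate candidates for up-tiering (hottest) and down-tiering (coolest).
# Group names and the metric are illustrative assumptions.

def tiering_candidates(io_rates, n=1):
    """Return (hottest, coolest) managed disk groups by I/O rate."""
    ranked = sorted(io_rates, key=io_rates.get, reverse=True)
    return ranked[:n], ranked[-n:]

hottest, coolest = tiering_candidates(
    {"mdg_sas": 4200.0, "mdg_ssd": 9800.0, "mdg_sata": 350.0}
)
# hottest -> candidates for up-tiering to faster media;
# coolest -> candidates for down-tiering to cheaper capacity.
```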
A large European telecommunications giant saw a 45% decline in storage acquisition cost in its first deployment, translating into a 55% discounted-cash-flow saving over a 4-year TCO evaluation.
Ensuring storage service levels
Continuous performance monitoring and reporting is key to business continuity and maintaining service levels in a cloud environment. Tivoli Storage Productivity Center provides end-to-end visibility to administrators from a single management console, including detailed performance metrics, data path and system connectivity, impact analysis and historical trending.
Administrators can ensure storage devices, storage networks, and attached devices are performing in an optimized state by setting different levels of thresholds for different storage entities based on the criticality of the asset. Alerts are generated when these thresholds are exceeded, duly notifying administrators of potential impact and downtime. Tivoli Storage Productivity Center also offers policy-based event actions based on performance values and business policies.
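The per-entity threshold-and-alert pattern described above reduces to a simple check. A minimal sketch, assuming illustrative entity names and limits (not the product’s configuration format):

```python
# A minimal sketch of threshold-based alerting: each storage entity has its
# own limit, and an alert fires when a metric exceeds it. Names and values
# are illustrative assumptions, not product configuration.

def check_thresholds(metrics, thresholds):
    """Return alert messages for every entity whose metric exceeds its threshold."""
    alerts = []
    for entity, value in metrics.items():
        limit = thresholds.get(entity)
        if limit is not None and value > limit:
            alerts.append(f"{entity}: {value} exceeds threshold {limit}")
    return alerts
```

The point of per-entity thresholds is exactly what the paragraph describes: a business-critical pool can alert at a lower utilization than a scratch pool.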
Tivoli Storage Productivity Center provides storage utilization insights by analyzing raw performance data against proven models to predict the utilization of components within the subsystem. This feature provides a unique “heat map” style of display that makes it easier for administrators to home in on storage “hot spots”, and thus more easily identify risks and discover both under- and over-utilized areas of the storage infrastructure.
To learn more about Storage Optimizer, click here
In a cloud environment, administrators are challenged to create and manage shared storage services that can be charged back to users based on consumption. When usage varies between departments or businesses, storage administrators require chargeback capabilities in order to simplify departmental allocation and manage capacity utilization.
Through integration with Tivoli Usage and Accounting Manager, Tivoli Storage Productivity Center for Data
enables storage administrators to understand storage usage and perform cost allocation or charge back users of storage. These solutions help support storage administrators in their efforts to track, allocate, and bill different departments and lines of business based on multiple usage metrics. As a result, the organization can better align the storage infrastructure with overall business objectives and be better prepared to meet future requirements.
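The chargeback arithmetic itself is straightforward: capacity used per department, priced by a rate that can differ per service tier (or per customer). A hedged sketch, with all department names and rates as illustrative assumptions:

```python
# Illustrative chargeback arithmetic: bill each department for the capacity
# it consumes, at a per-GB rate that differs by service tier. All names and
# rates here are assumptions, not Tivoli Usage and Accounting Manager data.

def chargeback(usage_gb, rate_per_gb):
    """Compute the charge per department from tiered usage and tier rates."""
    return {
        dept: round(sum(gb * rate_per_gb[tier] for tier, gb in tiers.items()), 2)
        for dept, tiers in usage_gb.items()
    }

bill = chargeback(
    {"finance": {"tier1": 500, "tier2": 2000}, "marketing": {"tier2": 800}},
    {"tier1": 0.90, "tier2": 0.35},
)
# finance: 500*0.90 + 2000*0.35 = 1150.0 ; marketing: 800*0.35 = 280.0
```

Differing rates per tier are what make the catalog-plus-chargeback combination effective: departments that insist on tier-1 service see the cost of that choice on their bill.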
Tivoli Storage Productivity Center for Data supports multi-tenancy requirements, allowing cloud storage administrators to manage and track storage usage for multiple clients simultaneously. Advanced multi-customer support capabilities enable organizations to charge in different currencies and to charge different rates for the same service, as well as providing cloud consumers with price breakdowns for resources used and resources reserved for future use. The solution supports large data centers as well as public and hybrid cloud environments.
Click here
to download the white paper ‘Optimizing capacity and management of file systems and databases’.
The Storage Hypervisor discussion is gaining momentum. Join the conversation! The virtual dialogue on this topic will continue in a live group chat on September 30, 2011 from 12 noon to 1pm Eastern Time. You can log in now for a preview of topics.
This is part 2 of a 3 part post on how somebody responsible for a private storage environment could save their company a pile of money by implementing cloud storage techniques. Part I
introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. In this 2nd post, I’m going to explain some of the key storage cloud management controls that can be used to drive down cost.
Storage services are standardized. When it comes to shopping, I avoid (at almost all costs) actually going to the store. You can keep all the time and frustration of traffic, fighting for a parking place, wandering aimlessly through aisles of choices, and standing in checkout lines. I’ll take the simplicity and speed of a good online catalog any day.
The idea of shopping from a catalog isn’t new and the cost efficiency it offers to the supplier isn’t new either. Public storage cloud service providers seized on the catalog idea quickly as both a means of providing a clear description of available services to their clients, and of controlling costs. Here’s the idea… I can go to a public cloud storage provider like Amazon S3, Nirvanix, Google Storage for Developers, or any of a host of other providers, give them my credit card, and get some storage capacity. Now, the “kind” of storage capacity I get depends on the service level I choose from their catalog. These folks each offer a few different service level options. Amazon S3, for example, offers Standard Storage or Reduced Redundancy Storage (can you guess which one costs less?).
Most of today’s private IT environments represent the opposite end of the pendulum swing – total customization. Every application owner, every business unit, every department wants complete flexibility to customize their storage services in any way they want. This expectation is one of the reasons so many private IT environments have such a heavy mix of tier-1 storage. Since there is no structure around the kinds of requests that are coming in, the only way to be prepared is to have a disk array that could service anything that shows up. Not very efficient… There has to be a middle ground.
Enter the private storage cloud with its storage service catalog. In the consultative service engagements
we’ve done, we have found that most private enterprises have something like fifteen-ish distinct data types (things like database, e-mail, video, shared files, home directories, etc). A simple storage service catalog would describe the specific service levels needed by each of these data types. Let’s take “Database” and build out the scenario.
The first thing you’ll need is a place to create your catalog of storage services. IBM Tivoli Storage Productivity Center Standard Edition is a good option (man, what a mouthful – let’s just call it TPC SE for short… hmm, I’ll probably get fired for that :-) You’re going to use the wizard to create a new “Database” catalog entry.
Now, for each catalog entry, there are a variety of service levels that can be defined that cover things like capacity efficiency, I/O performance, data access resilience, and disaster protection. By this point you’re probably rolling your eyes because you know your application owners… and they’re going to want every byte of their data to have the highest available service in each of these areas (and you wonder why you have so much tier-1 storage). A little bit further into this post we’re going to talk about the wonder of usage-based chargeback, but we’re getting ahead of ourselves. For now, let’s assume you’re having a coherent conversation with your application owners and are able to define realistic needs for your database data. Maybe something like this…
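One way to picture such a “Database” entry is as a small structure of service-level attributes. This is purely a hypothetical sketch – the attribute names and values are my illustrative assumptions, not the actual TPC SE catalog schema:

```python
# Hypothetical sketch of a "Database" storage service catalog entry.
# Attribute names and values are illustrative assumptions only.

DATABASE_ENTRY = {
    "provisioning": "virtual volumes (storage hypervisor)",  # note: hypervisor!
    "capacity_efficiency": {"thin_provisioned": False, "raid_level": "RAID 10"},
    "io_performance": {"solid_state_disk": True},
    "data_access_resilience": {"multipathing": True},
    "disaster_protection": {"mirroring": "asynchronous",
                            "snapshots": "application-consistent"},
}

def requires_hypervisor(entry):
    """True if the catalog entry calls for virtual volumes from a storage hypervisor."""
    return "storage hypervisor" in entry["provisioning"]
```

The useful property is that once the entry is written down, every new “Database” request inherits these service levels instead of being negotiated from scratch.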
From there, you’re back to the wizard. Actually defining the attributes of the catalog entry is a little mundane (lots of propeller-head knobs and dials to turn), but once you’re done – you’re done! – and life gets really efficient. So, let’s get the mundane stuff out of the way. First are the capacity efficiency and I/O performance attributes (be sure to notice that for “Database” we are telling the catalog we want virtual volumes – from a storage hypervisor. There will be a test in a paragraph or so :-)
Then the data access resilience attributes.
And finally the disaster protection attributes.
I told you it was a little mundane. But now come the exciting results that really drive cost out of the environment and save a huge amount of administrative time.
First is capital expense. You’re running mostly tier-1 disk arrays today. You have just finished defining the fifteen-ish catalog entries your company is going to use. Some, like “Database”, call for storage services that are often associated with tier-1 disk arrays. Most others don’t. With a little intelligent forecasting, you should be able to determine exactly how much tier-1 storage capability you really need, and how much lower-tier storage you can start using. We’ve seen clients shift their mix from 70% tier-1 to 70% lower-tier storage (a pretty significant capital expense shift). And if the thought of moving all that existing data from tier-1 to a lower tier makes you shudder, refer back to Part I of this post and look again at the data mobility provided by a good storage hypervisor (Test: did you notice that for “Database” we told the catalog we wanted virtual volumes – from a storage hypervisor?).
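To make that tier-mix arithmetic concrete, here is a small sketch. The per-TB prices are invented for illustration only (they are not IBM pricing); plug in your own acquisition costs.

```python
# Hypothetical tier-mix arithmetic. The per-TB prices below are
# assumptions for illustration; substitute your own acquisition costs.
def blended_cost_per_tb(tier1_fraction, tier1_price, lower_price):
    """Weighted-average acquisition cost per TB for a given tier mix."""
    return tier1_fraction * tier1_price + (1 - tier1_fraction) * lower_price

TIER1_PRICE = 3000   # $/TB, assumed
LOWER_PRICE = 1000   # $/TB, assumed

before = blended_cost_per_tb(0.70, TIER1_PRICE, LOWER_PRICE)  # 70% tier-1 mix
after = blended_cost_per_tb(0.30, TIER1_PRICE, LOWER_PRICE)   # 30% tier-1 mix

print(f"before: ${before:,.0f}/TB  after: ${after:,.0f}/TB")
print(f"capital expense reduction: {100 * (before - after) / before:.0f}%")
```

Under these assumed prices, flipping the mix from 70% tier-1 to 30% tier-1 cuts the blended cost per TB by about a third.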
The second big savings is in operational expense (keep reading).
Storage provisioning is self-service.
Most public storage services are targeted at end users like you and me who bring our credit card and provision some storage. Private storage clouds are a little different. Administrators we talk to aren’t generally ready to let all their application owners and departments have the freedom to provision new storage on their own without any control. In most cases, new capacity requests still need to stop off at the IT administration group. But once the request gets there, life for the IT administrator is sweet!
Here comes the request from an application owner for 500GB of new “Database” capacity (one of the options available in the storage service catalog) to be attached to some server. After appropriate approvals, the administrator can simply enter the three important pieces of information (type of storage = “Database”, quantity = 500GB, name of the system authorized to access the storage) and click the “Go” button (in TPC SE it’s actually a “Run now” button) to automatically provision and attach the storage. No more complicated checklists or time consuming manual procedures.
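The three-parameter flow above can be sketched as a few lines of Python. To be clear, the `ProvisionRequest` type and `provision()` helper are hypothetical names made up for this illustration; TPC SE drives this through its GUI wizard, not through this API.

```python
# Hypothetical sketch of the self-service provisioning step. The names
# here are invented for illustration, not TPC SE's actual interface.
from dataclasses import dataclass

@dataclass
class ProvisionRequest:
    service_class: str   # catalog entry, e.g. "Database"
    capacity_gb: int     # requested quantity
    host: str            # system authorized to access the storage

def provision(request, catalog):
    """Validate the request against the service catalog and return the
    attributes the automation would apply (the "Run now" step)."""
    if request.service_class not in catalog:
        raise ValueError(f"unknown service class: {request.service_class}")
    attrs = catalog[request.service_class]
    return {"host": request.host, "capacity_gb": request.capacity_gb, **attrs}

# The "Database" entry carries the service levels defined in the catalog.
catalog = {"Database": {"virtual_volume": True, "compressed": True}}
volume = provision(ProvisionRequest("Database", 500, "dbserver01"), catalog)
print(volume)
```

The point of the sketch: the administrator supplies only the three inputs, and everything else (tiers, resilience, protection) comes from the catalog entry.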
Storage is paid per use.
It’s a little-appreciated but incredibly powerful tool in the quest to drive cost out of the environment. When end users are aware of the impact of their consumption and service level choices, they tend to make more efficient choices. Conversely (we all know what happens here), when there’s no correlation between service level choices and end-user visibility into cost… well… you have a lot of tier-1 storage on the floor.
A chargeback tool like IBM Tivoli Usage and Accounting Manager (TUAM) completes the story we have been building…
- You negotiate a set of storage service levels (like “Database”) with your application owners and business units.
- You create the storage service catalog entry for “Database”.
- Your end users request some new “Database” capacity be assigned to a particular server.
- You push the “Run now” button and the capacity is auto-provisioned.
- Your end user receives an invoice (complete with individual line items for each class of service in which they are consuming capacity).
- You’re in the cloud now!
Stay tuned for Part III
of this post where I’ll explore some technical thoughts you’ll want to consider when picking a storage hypervisor. The conversation is building!
Earlier this week, fellow IBM blogger Tony Pearson joined the conversation with a perspective on Storage Hypervisor integration with VMware
. And IBM blogger Rich Vining added a perspective on VMware Data Protection with a Storage Hypervisor
. To cap it off, we just completed our first live group chat with over 30 IT managers, industry analysts, independent bloggers, and IBM storage experts. Join the conversation!
The virtual dialogue on this topic will continue in another live group chat on September 30, 2011 from 12 noon to 1pm Eastern Time.
I recently read an excellent post
by Ron Riffe, a fellow IBMer, discussing practical recommendations for introducing cloud techniques into a private storage environment – the end goal being to save your company a substantial amount of money while becoming more responsive to the needs of the business. The first of the four steps discussed in the post was to introduce a storage hypervisor – virtualization of your storage infrastructure. It’s a good idea, especially if you have already virtualized some or all of your production server environment with something like VMware.
But there’s more to it than just the efficiency and mobility you get from virtualizing. The customers we talk to are finding new value that rises out of the synergy when both the server and storage environments are virtualized. One example is in the area of data protection. In this post, I’m going to explain the 1+1=3 effect for data protection that comes from combining VMware with a good storage hypervisor.
Let’s start with a quick walk down memory lane. Do you remember what your data protection environment looked like before virtualization? There was a server with an operating system and an application… and that thing had a backup agent on it to capture backup copies and send them someplace (most likely over an IP network) for safekeeping. It worked, but it took a lot of time to deploy and maintain all the agents, a lot of bandwidth to transmit the data, and a lot of disk or tapes to store it all. The topic of data protection has modernized quite a bit since then.
Today, you’re using a server hypervisor (VMware) to efficiently pack several virtual machines onto one physical server – and to make it so you can deploy, move and decommission those VMs pretty much at will. If you are still using the old techniques for data protection (deploying an agent on each individual VM, and then transferring all the backup data for those VMs through the one IP network pipe) on that physical server, you’re probably running into significant performance and application availability problems, and also missing out on some significant savings (if you listen carefully, you can hear your backup environment screaming “modernize me, MODERNIZE ME!”).
Fast forward to today. Modernization has come from three different sources – the server hypervisor, the storage hypervisor and the unified recovery manager. The end result is a data protection environment that captures all the data it needs in one coordinated snapshot action, efficiently stores those snapshots, and provides for recovery of just about any slice of data you could want. It’s quite the beautiful thing.
Data capture: VMware has provided a nice set of APIs that allow disk arrays and backup vendors to intelligently drive snapshots of a VMware datastore (for the techies, these are the vStorage APIs for Data Protection, or VADP). The problem is that integration from a disk array to these APIs is a tier-1 kind of service that is found on very few disk arrays today. That’s where a good storage hypervisor comes in. A storage hypervisor will include its own integration between VMware VADP and hardware-assisted snapshots, and it will plug the control GUI directly into the VMware vCenter management console. That means, regardless of what type of disk array capacity you have chosen to use for your VMware data, the storage hypervisor will be able to do a hardware-assisted snapshot of the VMware datastore (all your VMs at once – sweet!).
Here’s a scenario we see…
- Administrators want to snapshot the VMware datastore 4 times a day. 4 days’ worth are maintained – 16 total snapshots “online”
- For longer term recovery, they promote one snapshot each day to a unified recovery manager. 1 month of these are maintained – 31 total snapshots “nearline”
The snapshots can add up, so efficiency is important. For the “online” snapshots, a good storage hypervisor stores only incremental changes, compresses the result and stores it as a thin provisioned volume on lower-tier disk capacity (the new 3TB SAS drives make a nice choice). Notice in this scenario, the administrator is also promoting one of the snapshots each day (say, the midnight snapshot) to an enterprise recovery manager. If you are using IBM’s Tivoli Storage Manager Suite for Unified Recovery, then it will insert deduplication in the list of efficiency techniques being applied to the snapshot (incremental snapshots that are deduplicated, compressed, and stored on lower-tier disk… that’s about as efficient as it gets).
Flexible recovery: Whether the snapshot is online or nearline, the only reason you have it is so that you can recover when something (anything) goes wrong. A good hypervisor / unified recovery manager combination will give VMware administrators the ability to peer inside the snapshot and recover individual files, virtual volumes, or entire VMs. Using the scenario above, your recovery point would be no more than 6 hours old for the last 4 days, and your recovery time would be measured in minutes.
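The arithmetic behind the scenario above is easy to check for yourself:

```python
# Checking the retention arithmetic in the snapshot scenario above.
SNAPSHOTS_PER_DAY = 4
ONLINE_RETENTION_DAYS = 4
NEARLINE_SNAPSHOTS = 31             # one promoted snapshot per day, kept a month

online_snapshots = SNAPSHOTS_PER_DAY * ONLINE_RETENTION_DAYS
worst_case_rpo_hours = 24 / SNAPSHOTS_PER_DAY   # gap between snapshots

print(online_snapshots)        # snapshots held "online"
print(NEARLINE_SNAPSHOTS)      # snapshots held "nearline"
print(worst_case_rpo_hours)    # oldest a recovery point can be, in hours
```

Four snapshots a day kept for four days is 16 online recovery points, and the six-hour gap between snapshots is exactly the worst-case recovery point mentioned above.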
IBM offers one of the world’s best-known unified recovery managers and the world’s most widely deployed storage hypervisor. With over 7,000 storage hypervisor deployments, we’ve had a lot of opportunity to build some depth. Deep integration with VMware for modernizing your data protection environment is one example. If you are running VMware and haven’t yet modernized data protection, IBM can help. You can learn more at the following links.
Storage hypervisor platform: IBM System Storage SAN Volume Controller (SVC)
Storage hypervisor management, storage service catalog, and self-service provisioning: Tivoli Storage Productivity Center Standard Edition
Join the conversation!
The virtual dialogue on this topic will continue in a live group chat
on September 23, 2011 from 12 noon to 1pm Eastern Time
. Join some of the Top 20 storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
Simplify Data Protection and Reduce Costs With Unified Recovery Management
On September 22, we will be hosting an educational webcast that will address the challenges of providing data protection and recovery for rapidly growing amounts of diverse enterprise data. During this call, you will hear about our unified recovery management solution that can help reduce complexity, risk and costs. Included in this solution is a new simple, value-based option for procuring and managing software licenses.
Speaker: Rich Vining, Product Marketing Manager
Date: September 22, 2011
Time: 11:00 AM Eastern US
Please register for this event using this link.
After registering you will receive a confirmation note with call-in instructions.
To borrow a phrase from a fellow blogger… Interest from customers in cloud storage is very, very hot, and that’s been keeping us very, very busy. The interest underscores the fact that public storage cloud providers have sent a “cost shockwave” through the industry and customers are taking notice.
While CIOs may still be too concerned about security and service levels to put much real corporate information in the public cloud, they have taken notice that these service providers are offering storage capacity at prices that are often lower than what they are paying for their own private storage. Sure, a service provider theoretically has more economy of scale and so could demand a better price from their hardware vendors, but they also have some profit margin to build into their “service”. There has to be more to it. The customers I talk to are wondering what these service providers are doing to operate at those costs – and if any of their techniques can be applied in a private storage environment.
The situation begs the question “what is it that differentiates these public storage clouds from the traditional private storage environments that most clients operate?” From our experience with customers, there are four significant differences.
- Storage resources are virtualized from multiple arrays, vendors, and datacenters – pooled together and accessed anywhere.
(as opposed to physical array-boundary limitations)
- Storage services are standardized – selected from a storage service catalog.
(as opposed to customized configuration)
- Storage provisioning is self-service – administrators use automation to allocate capacity from the catalog.
(as opposed to manual component-level provisioning)
- Storage usage is paid per use – end users are aware of the impact of their consumption and service level choices.
(as opposed to paid from a central IT budget)
In this post, I’m going to try to explain these four concepts in sufficient detail that somebody responsible for a private storage environment could walk away with some practical recommendations that could save their company a pile of money. Most of this isn’t really original (the concepts have been around for a while), but so few enterprises operate this way that the person who introduces their company to these ideas often looks like a genius (and who doesn’t like that!!). It’s a long topic, so I’ve broken it into 3 posts.
In Part I of this post:
I’ll explain the value of virtualizing storage resources. Hint: you’ve likely already done it to your server resources with some sort of server hypervisor like VMware vSphere
, or IBM PowerVM
, or Microsoft Hyper-V
… so now let’s look at what you get from doing it to your storage resources with a storage hypervisor
In Part II of this post:
I’m going to explain how public storage clouds use management controls like service catalogs, self-service provisioning, and pay-per-use to drive down their costs. I’ll also try to offer some practical ideas for using these techniques in a private enterprise setting to gain similar efficiencies.
In Part III of this post:
I’m going to explore some technical thoughts you’ll want to consider when picking a storage hypervisor.
Ready to jump in?
Storage resources are virtualized. Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them. Server hypervisors and the virtual machines they manage have improved efficiency (no more wasted compute resources), freed up mobility, and ushered in a whole new “cloud” language.
Well, the same ideas apply to storage. As administrators catch on to these ideas, it won’t be long before we’ll be asking the question “Do you remember back when virtual machines used disks that really were physical arrays (all that “physical” stuff that kept everything in one place and slowed all your processes down)?”
In August, Gartner published a paper
that observed “Heterogeneous storage virtualization devices can consolidate a diverse storage infrastructure around a common access, management and provisioning point, and offer a bridge from traditional storage infrastructures to a private cloud storage environment” (there’s that “cloud” language). So, if I’m going to use a storage hypervisor as a first step toward cloud enabling my private storage environment, what differences should I expect? (good question, we get that one all the time!)
Perhaps the most obvious expectations are improved efficiency and data mobility. The basic idea behind hypervisors (server or storage) is that they allow you to gather up physical resources into a pool, and then consume virtual slices of that pool until it’s all gone (this is how you get the really high utilization). The kicker comes from being able to non-disruptively move those slices around. In the case of a storage hypervisor, you can move a slice (or virtual volume) from tier to tier, from vendor to vendor, and now, from site to site all while the applications are online and accessing the data. This opens up all kinds of use cases that have been described as “cloud”. One of the coolest is inter-site application migration. Just recently, a hurricane hit the east coast of the United States. If your datacenter had been in the projected path of that hurricane and if you had implemented both a server hypervisor (let’s say VMware vSphere for your Intel servers and IBM PowerVM for your Power systems), and a storage hypervisor platform (let’s say IBM SVC), then here’s what you might have said: “Hey, the hurricane is coming, let’s move operations to another datacenter further inland…” IBM SVC Stretched-cluster allows you to access the same data at both locations giving you the ability to do an inter-site VMware vMotion
and PowerVM Live Partition Mobility
(LPM) move – non-disruptively. As far as the end users are concerned, their applications are running in a private cloud. For you… you avoided a disaster and got to sleep well that weekend.
But storage hypervisors are more, much more than just virtual slices and data mobility. Remember, we’re trying to think like a service provider who is driving cost out of the equation. Sure, we’re getting high utilization from allocating virtual slices, but are we being as smart as we could be about allocating those slices? A good storage hypervisor helps you be smart.
- Thin provisioning: You have a client that asks for 500GB of new capacity. You’re going to give it to him as thin-provisioned virtual capacity, which is a fancy way of saying you’re not going to actually back it with real physical storage until he writes real data on it. That helps you keep cost down.
- Compression: Same guy also asks to keep several snapshot copies of his data for recovery purposes. You’re going to start by giving him thin provisioned capacity for those snapshots, but you’re also going to compress whatever data those snapshots produce – again adding to your efficiency.
- Agnostic about vendors: Because you’re providing virtual storage resources from a storage service catalog (we’ll talk more about that in Part II of this post), you have the freedom to shift the physical storage you operate from all tier-1 to a more efficient mix of lower tiers, and while you’re doing it you can create a little competition among as many disk array vendors as you like to get the best price / support.
- Smart about tiers: If you shut your eyes real tight and think about the concept of a “virtual” disk that is mobile across arrays and tiers, you’ll quickly start asking questions about having the storage hypervisor watch for I/O patterns on blocks within that virtual disk that would benefit from higher tier capacity, like solid-state (SSD) or flash disk for example. A good storage hypervisor will automate the detection of such patterns and move hot data blocks to these highest tiers of storage if you have them.
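The thin-provisioning idea from the list above can be sketched in a few lines. This is a conceptual illustration of allocate-on-write, not an actual SVC interface; the `ThinVolume` class is invented for the example.

```python
# Conceptual sketch of thin provisioning (allocate-on-write); an
# illustration of the idea, not a real storage hypervisor interface.
class ThinVolume:
    BLOCK_GB = 1  # simplified: one block represents 1 GB

    def __init__(self, virtual_size_gb):
        self.virtual_size_gb = virtual_size_gb   # what the client sees
        self.allocated = {}                      # block index -> data

    def write(self, block, data):
        if block >= self.virtual_size_gb:
            raise IndexError("write past end of virtual volume")
        self.allocated[block] = data             # physical capacity consumed here

    @property
    def physical_size_gb(self):
        return len(self.allocated) * self.BLOCK_GB

vol = ThinVolume(500)        # the client is told he has 500 GB
vol.write(0, b"data")
vol.write(7, b"more data")
print(vol.virtual_size_gb, vol.physical_size_gb)
```

The volume reports its full 500GB to the client, but the pool only gives up physical blocks as data is actually written; that gap is where the utilization savings come from.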
Are you getting the picture of why so many enterprises are beginning to agree with Gartner that a storage hypervisor can be a great first step in transitioning traditional IT into a private cloud storage environment? Application owners come to you for storage capacity because you’re responsible for the storage at your company. In the old days, if they requested 500GB of capacity, you allocated 500GB off of some tier-1 physical array – and there it sat. But then you discovered storage hypervisors! Now you tell that application owner he has 500GB of capacity… What he really has is a 500GB virtual volume that is thin provisioned, compressed, and backed by lower-tier disks. When he has a few data blocks that get really hot, the storage hypervisor dynamically moves just those blocks to higher-tier storage like SSDs. His virtual disk can be accessed anywhere across vendors, tiers and even datacenters. And in the background you have changed the vendor storage he is actually sitting on twice because you found a better supplier. But he doesn’t know any of this because he only sees the 500GB virtual volume you gave him. It’s “in the cloud”.
Stay tuned for Part II
of this post…
Join the conversation!
The virtual dialogue on this topic will continue in a live group chat
on September 23, 2011 from 12 noon to 1pm Eastern Time
. Join some of the Top 20 storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
The Global Tivoli User Community is hosting an all-day, storage-focused Tivoli User Group event in New York City on October 11th that will cover:
New Approaches and Best Practices for Data Lifecycle Management
Date: Tuesday, October 11, 2011 - 8:30 am – 4:30 pm
Location: IBM Office, 590 Madison Avenue, NYC - Briefing Center auditorium, 3rd Floor
At this event you will have the opportunity to learn from experts about and discuss the following:
- Current developments and future directions for Tivoli Storage solutions from Steve Wojtowecz, IBM VP of Tivoli Development
- Best practices from TSM customer deployments by event sponsors and IBM Business Partners Mainline Information Systems and Starfire Technologies
- Unified Recovery Management and Capacity Pricing Discussion
- Storage Hypervisor Solution / Storage in the
Continental breakfast and lunch will be served.
We hope you will join us to learn about these key storage topics, as well as hear about real-life customer success stories from our IBM Business Partners.
Space is limited and you must be a TUC member to attend. Register today to secure your spot at this special event!
Not a part of the Tivoli User Community yet? Become a member.
The IBM Champion program is still accepting nominations for experts on IBM Tivoli Software. Nominations are open through August 19.
The IBM Champion program recognizes exceptional contributors to the technical community -- clients and partners who work alongside IBM to build solutions for a smarter planet. An IBM Champion is an individual who leads and mentors his or her peers and motivates them toward IBM solutions and services. Champions can be found running user groups, managing websites, speaking at conferences, answering questions in online forums, writing blogs, submitting wiki articles, sharing how-to videos, and writing technical books.
The IBM Champion program recognizes and thanks these innovative thought leaders, amplifying their voice and increasing their sphere of influence in the technical community. IBMers, partners and clients are encouraged to submit nominations through August 13th. To learn more and to submit your nominations, go to: IBM Champion homepage.
IT managers are broadly exploiting virtual server infrastructures -- hypervisors -- to improve efficiency, provide for transparent mobility, and give common manageability and capabilities regardless of the type of server hardware being used. These same robust benefits are now available for virtual storage infrastructures with the IBM storage hypervisor (System Storage SAN Volume Controller and its management console, the Tivoli Storage Productivity Center).
Listen to the webcast
to understand how the IBM storage hypervisor can be a complementary next step in the overall IT environment virtualization process.
Click and learn more about IBM System Storage SAN Volume Controller
and IBM Tivoli Storage Productivity Center
. Please reach out to IBM sales or IBM Business Partners to understand how the IBM storage hypervisor solution can benefit your organization's effort to virtualize and efficiently manage storage.
Is your archived information earning its keep? Are explosive data growth, regulatory compliance, legal discovery, and data protection on your mind? Drivers like these demand long-term, high-volume data retention. Join IBM and IDC on June 8, 2011 at 11AM EST for an informative webinar on how to "Get more from your archived information."
Laura DuBois, IDC Program Vice President, Storage Software, and Craig Butler, IBM Business Line Executive for Storage Archive, Data Protection & Retention, will address a smarter approach to archiving. Find out how companies like yours use the IBM Smart Archive strategy
and lead offerings to help ensure that relevant information is properly retained and protected throughout its life cycle. As part of the Smart Archive strategy, IBM offers IBM Information Archive: a streamlined, flexible archiving solution that helps organizations of practically all sizes address their information retention needs - whether business, legal or regulatory.
On May 31 2011, IBM announced the availability of Tivoli Storage Manager Suite for Unified Recovery
, a bundle of ten Tivoli Storage Manager and Tivoli Storage Manager FastBack products, with an easy to order, deploy and manage capacity-based pricing model.
With this offering, you can deploy any of ten different solution components, in any location and quantity to meet the unique data protection and recovery needs across a wide range of systems, applications and service level requirements. The pricing is based on a tiered Terabyte metric that measures the amount of data managed in Tivoli Storage Manager primary storage pools and FastBack repositories. License costs can be dramatically reduced through the use of built-in source and target data deduplication, and there is no charge for duplicate copies of the data that may be used for disaster recovery, testing, data mining and other purposes.
The individual products included in this comprehensive package are:
- Tivoli Storage Manager Extended Edition: Provides core backup/restore for a wide range of operating systems; broad support for tiers of storage; NDMP, IBM DB2® and Informix® support; source and target deduplication; advanced disaster recovery planning; and much more.
- Tivoli Storage Manager for Databases: Performs online, consistent and centralized backups for Oracle and SQL Server to avoid downtime, protect vital enterprise data infrastructure and minimize operating costs.
- Tivoli Storage Manager for Enterprise Resource Planning: Performs online, consistent and centralized backups for SAP environments.
- Tivoli Storage Manager for Mail: Protects data on email servers running Lotus® Domino® or Microsoft® Exchange, with granular restore of Exchange email objects.
- Tivoli Storage Manager for Virtual Environments: Automatically discovers and protects VMware virtual machines; offloads backup workloads to a centralized server and enables flexible, near-instant recovery.
- Tivoli Storage Manager for Space Management: Moves inactive data to reclaim online disk space for important active data; frees administrators from manual file system pruning tasks; and defers the need to purchase additional disk storage.
- Tivoli Storage Manager for Storage Area Networks: Provides high-performance backup/restore by removing data transfer from the LAN.
- Tivoli Storage Manager FastBack: Provides efficient block-level incremental backup and near-instant restore for critical Microsoft Windows® and Linux® servers and applications, in the data center and in remote offices.
- Tivoli Storage Manager FastBack for Microsoft Exchange: Provides the ability to recover individual Microsoft Exchange objects such as email, attachments, calendar entries, contacts and tasks.
Existing customers of Tivoli Storage Manager are also able to take advantage of this new pricing model, which will eliminate the need to count Processor Value Units and help to gain better visibility and control of future licensing costs. For more information on converting to this model, please contact your IBM Sales Rep or Business Partner.
Later this week, I will post a list of the advanced capabilities that Tivoli Storage Manager Suite for Unified Recovery can bring to your IT environment, with the overall benefits of reducing data growth, improving operational efficiency and dramatically reducing costs.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Tivoli will be performing a live demonstration of Tivoli Storage Manager FastBack
and Tivoli Storage Manager FastBack for Exchange
data protection products. These welcome additions to the Tivoli Storage Manager product family provide the ability to meet aggressive Recovery Point and Recovery Time Objectives in an organization's data protection service.
The TSM FastBack family provides many advanced features including:
- Instant Restore allows users to access their data or application immediately, while the restore is taking place.
- Incremental Forever Backups prevent wasting time and money on performing and storing unnecessary full backups. Each backup appears to be a full backup, but only the blocks that have been modified are copied.
- FastBack Mount allows access to backed-up data without it being recovered. This allows data to be validated after backups, the correct data to be identified before it is recovered, or data to be opened and its contents recovered at a more granular level, thus reducing the size and duration of the recovery.
- Exchange Brick-level Recovery allows messages (as well as Contacts, Calendar Items, etc.) to be recovered from a previous backup without requiring an entire Exchange Database to be recovered. TSM FastBack for Exchange does not require additional backup processing to provide Individual Mail Recovery.
- Branch Office Disaster Recovery allows replication of branch office backup data to a central site. This data can be compressed and encrypted during the transfer. The replicated data at the central site can be used as the source for creating a tape copy of the data or for recovering branch office data and hosts. TSM FastBack allows the backups and replication of multiple branch offices to be monitored with a single tool.
- TSM FastBack Bare Machine Recovery allows Windows hosts to be quickly recovered, even to dissimilar hardware.
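The incremental-forever idea from the feature list above can be sketched in a few lines of Python. This is a conceptual illustration of the technique (changed-block tracking plus layered reconstruction), not FastBack's actual internals.

```python
# Conceptual sketch of incremental-forever backup: after the first backup
# only changed blocks are copied, yet any recovery point can still be
# materialized as a full image.
def take_backup(volume, previous):
    """Copy only the blocks that changed since the previous state."""
    return {i: b for i, b in volume.items() if previous.get(i) != b}

def materialize(backups, point):
    """Rebuild a full image at recovery point `point` by layering the
    incremental backups oldest-first."""
    image = {}
    for delta in backups[: point + 1]:
        image.update(delta)
    return image

v1 = {0: "a", 1: "b", 2: "c"}        # initial volume state
backups = [take_backup(v1, {})]      # first backup copies every block
v2 = {0: "a", 1: "B", 2: "c"}        # one block modified
backups.append(take_backup(v2, v1))  # incremental: only the changed block stored
print(len(backups[1]))               # a single block was copied...
print(materialize(backups, 1) == v2) # ...yet the latest point restores in full
```

That is the "each backup appears to be a full backup" trick: the stored deltas stay small, while reconstruction layers them to present any point in time as a complete image.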
This demonstration is open to Customers, Business Partners and IBM employees.
There are Web Conference and Audio Conference components to this demonstration.
Conference ID is 9663533
Prior to the web conference, we suggest you do the following:
1) Go to www.sametimeunyte.com
2) Click on Support
3) Click on Lotus Sametime Unyte Meeting System Check
4) Select attendee type and click Next
5) Proceed with the system check and install any plug-ins required.
Toll Free: 888-426-6840
The scheduled dates are:
Here is a video that showcases the ease of installing the Tivoli Storage Productivity Center suite onto a Windows server. The video, created by Mike Griese
, captures some key developments as part of version 4.2:
- No need for Common Agent Manager
- Fewer values to enter on setup screens
- Faster overall installation time
The video takes you through a step-by-step, wizard-based installation of Tivoli Storage Productivity Center, Tivoli Integrated Portal and Tivoli Storage Productivity Center for Replication.
Tivoli announces the availability of two white papers for Tivoli Storage Productivity Center.
Managing Storage Area Network (SAN) configuration
: As organizations strive to better manage their business infrastructure, leaders in operations management and system administration have been charged to minimize service disruptions by better understanding the impact of changes on the infrastructure; integrate and streamline processes across silos to reduce complexity; and implement “green” IT. Storage infrastructure management can be a key element in addressing these issues successfully. Download here
Storage management solution: Deriving substantial benefits from Tivoli Storage Productivity Center
: Storage administrators are increasingly turning to more sophisticated management tools to help them overcome these challenges. Storage Resource Management tools are focused on helping the storage administrators deal with configuring these modern IT storage networks. These tools provide a comprehensive view of the end-to-end storage infrastructure, from the hosts, through the fabric network to the storage arrays. Download here
Attention all Tivoli User Community Members! Free online training will be offered on May 26, 2011, from 9:30 AM to 1:30 PM Eastern on the topic of Tivoli Storage Manager for Virtual Environments.
The following topics will be covered:
- Explain the different types of virtual machine backups (lecture)
- Explain the benefits of Tivoli Storage Manager for Virtual Environments (lecture)
- Perform full VM-level backups and restores (demo)
- Perform file-level backups and restores (demo)
- Install and configure Tivoli Storage Manager for Virtual Environments (demo)
- Perform block-level backups with Changed Block Tracking (CBT) (demo)
Topics were selected from the IBM Tivoli Storage Manager 6.2 for Virtual Environments workshop and tailored for a one-day online presentation. REGISTER HERE!
Join us on May 17th at 1:00 p.m. Eastern / 10:00 a.m. Pacific to hear all about Smarter Storage & Data Management for Virtual Server Environments.
Featured Speakers:
Richard Vining, IBM Tivoli Storage Product Marketing Manager & John Connor, IBM Tivoli Storage Product Manager
There is a huge transformation happening in IT organizations: they are migrating from single-purpose physical servers to consolidated virtual machine technologies. The benefits of virtualization are many: cutting acquisition, management and facilities costs by reducing the number of physical machines; increasing service levels through faster server provisioning; and enabling new delivery models such as cloud services. However, virtualizing servers does not reduce the amount of data that is created and stored; in fact, it can have the opposite effect as virtual machines are moved or de-provisioned. This presentation will describe smarter ways of managing all this data and the infrastructure that stores it, featuring the IBM Tivoli Storage Productivity Center family of products.
Join us on May 12th at 2:00 p.m. Eastern / 11:00 a.m. Pacific to hear all about Tivoli Storage Manager (TSM) for Virtual Environments.
Featured Speakers:
Greg Van Hise - IBM Storage Architect & Richard Vining - IBM Product Marketing Manager for TSM
There is no question that server virtualization has been a boost to the businesses that have embraced it, but it is also causing huge headaches for storage administrators. Join IBM industry leaders for this live, interactive event as they introduce the newest addition to the Tivoli Storage Manager family, built to provide advanced data protection and fast, flexible recovery of your VMware environments. This online-only event allows you to hear from experts as they review TSM for Virtual Environments and demonstrate how it can help you reduce costs while meeting service level requirements. The event will include a 20-minute presentation, followed by a 20-minute live demo of the actual product.
Last week at the 2011 CRN Xchange event in Orlando, the results of the 21st Channel Champions awards were announced. IBM Software was honored with a Gold Award in two categories, including Backup and Recovery Software for the IBM Tivoli Storage Manager family. The winners will be featured in the April issue of CRN, both in print and online.
Everything Channel's Channel Champions survey helps solution providers select the cream of the crop from the myriad of vendors out there. The Channel Champs program measures the satisfaction of solution providers that currently sell a particular vendor's products and/or services, regardless of whether they are formally in that manufacturer's channel program. It is the largest business partner satisfaction study conducted.
Five categories relevant to IBM Software were in the 2011 Channel Champions survey: Data and Information Management, Middleware, Business Intelligence, Collaboration, and Backup and Recovery software.
The survey results were dominated by IBM and Microsoft.
In 2011 we came out as clear winners, sweeping both the Middleware and the Backup and Recovery categories. Most notable is the Backup and Recovery win for Tivoli, a clean-sweep victory. It is worth noting that in 2009 Tivoli was a distant third in this category, confirming that a great deal of progress has been made in both product capabilities and channel programs.
IBM placed second in the other three categories, losing by a narrow margin to Microsoft in Data and Information Management and by wider margins in Business Intelligence and Collaboration.
Congratulations to the IBM Channels Sales and Marketing Teams for achieving this great honor, and THANK YOU to our fantastic Business Partners for all you do to support our solutions and our mutual customers.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
The Tivoli Storage Management team is beginning a new series of educational webcasts for the end user community on a range of topics. The first session will focus on data protection and recovery of virtual server environments -- and you're invited!
Server virtualization has been a boon to the businesses that have embraced it, but it is causing huge headaches for storage administrators. One area of particular concern is backup and recovery, especially as the use of virtual servers grows in production environments.
Are all of the Virtual Machines (VMs) covered with appropriate backup policies? Are you able to manage the sprawl of backup agents as the number of VMs increases? Are you having trouble meeting backup windows and recovery SLAs on your VMs?
Join IBM as we introduce the newest addition to the TSM family, built to provide advanced data protection and fast, flexible recovery of your VMware environments. We will review TSM for Virtual Environments and show how it can help you reduce costs while meeting service level requirements, with a 20-minute presentation followed by a 20-minute live demo of the product.
Date: April 14, 2011
Time: 2:00 PM Eastern
Speakers: Greg Van Hise, TSM Software Architect; Rich Vining, Product Marketing Manager
Please register for this event using the On-Line Registration link.
I attended IDC’s Directions2011 event yesterday in Boston. It was kicked off by Frank Gens, IDC Senior Vice President and Chief Analyst, who made a compelling case that the IT industry is at a historic inflection point, entering the third platform for growth.
The first platform, of course, was the mainframe for computing and terminals for users. The second platform formed between 1981 and 1986 with the introduction of the personal computer, client/server computing, and local area networks. The number of IT devices and users, and the amount of money spent on IT, grew exponentially compared with the first platform.
Now, 25 years later, we’re poised for the next platform. It will be built on mobile devices and mobile applications, cloud services, big data with advanced analytics, and social business. To highlight this shift, IDC believes that mobile, app-capable device shipments will exceed PC shipments this year for the first time, at around 400 million units each, and of course the two curves will never cross again.
Frank made an interesting observation – that most of the innovative young engineers are being drawn to the growing mobile and social platforms, basically “sucking the oxygen out of PC development”. He also noted that there are at most 75,000 applications for PCs but already there are 1.3 million apps for mobile devices.
Cloud services are the back-end delivery model for most mobile applications, and IDC predicts that 80% of all new enterprise applications will be delivered in a cloud model, at least as an option. “Big Data” is a term IDC uses to describe the types of solutions that IBM talks about in its Smarter Planet ads – smart electrical grids, individualized healthcare, optimized traffic, smarter buildings, etc. And “Social Business” reflects the fact that customers have taken control of the buying cycle, and vendors need to shift from push marketing to social marketing – something that is already happening. In a recent IDC study on IT marketing spending, the slice of the marketing budget for social media increased from 13% to 19% in just one year.
The real message behind Frank’s presentation, though, was to ask which IT companies will be the Wangs, Digitals and Cullinets – the casualties of the last platform shift – as the world moves to the third platform for IT growth. History suggests that those that remain rooted in the PC client/server model are doomed. Those that are embracing the future of IT are poised to experience unprecedented opportunities for growth and profit.
IDC is holding their west coast Directions2011 conference on March 15th in San Jose. If you’re an IDC client and can make it to the event, I highly recommend you do.
The Storwize Rapid Application Storage solution, launched in February 2011, brings together innovative storage technology, comprehensive management software and implementation services that help you manage business applications and storage growth efficiently. Continuing my earlier post 'Tivoli Storage Productivity Center supports Storwize V7000', my colleague Ian Wright has created a brilliant video showcasing the performance monitoring, alerting and reporting capabilities of Tivoli Storage Productivity Center as part of the Storwize Rapid Application Storage solution.
Specifically, Ian walks us through creating performance reports at the subsystem level to understand the source of a data surge; creating batch reports that give better insight into storage capacities, including tiering information; and enabling thresholds that generate alerts when they are breached.
For more information on Storwize Rapid Application Storage, click here. To learn more about Tivoli Storage Productivity Center for Disk Midrange Edition, click here
Please reach out to an IBM Sales Specialist or IBM Business Partner to understand how the Storwize Rapid Application Storage solution can benefit your organization’s efforts to manage the data explosion efficiently.
Note: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.