I am often asked... "When can I use FlashCopy Manager with my EMC disk array?" (substitute "EMC" with your favorite vendor)
With FlashCopy Manager for Windows, you can leverage hardware snapshots on any disk array that has a VSS hardware provider. This is because Windows has a built-in architecture, the Volume Shadow Copy Service (VSS), that enables pluggable snapshot support. A few years ago, we wrote a developerWorks article that explains how this works and how it integrates with TSM. (Note: the article refers to "TSM for Copy Services" rather than "FlashCopy Manager" because it was written before the product was renamed.)
But with FlashCopy Manager for UNIX and Linux and FlashCopy Manager for VMware, you must wait until support is added for your desired disk array. Last year, IBM partnered with Rocket Software to develop a device adapter pack that plugs in to FlashCopy Manager for UNIX and Linux and FlashCopy Manager for VMware to extend support to more storage devices. You install it on top of an existing FlashCopy Manager (version 4.1 or later) installation on the application server being protected by FlashCopy Manager for UNIX (or on the proxy backup server in the case of FlashCopy Manager for VMware) and configure it to talk to the storage device. After that, you can leverage the power of FlashCopy Manager snapshot protection for the hardware device supported by that device adapter pack!
At the end of last year, Rocket Software released support for EMC Symmetrix (VMAX and DMX). They are planning to add more disk arrays in 2014. If you have devices that you want to see added, contact Rocket Software.
Have a great day!
IBM recently published a set of reference architecture blueprints, "Blueprint and Server Automated Configuration for Linux x86", to simplify Tivoli Storage Manager (TSM) sizing and deployment. Feedback from IBM Business Partners has been positive. In December 2013, Version 1.2 was released for free download.
What is a Reference Architecture Blueprint?
A reference architecture blueprint is a detailed hardware specification designed to manage a specific workload. In this case, the blueprints specify hardware requirements for small, medium, and large TSM workloads, following best practices for data protection.
Why Use Reference Architecture Blueprints Instead of Sizing?
Traditional backup server sizing is extremely flexible and recommended for unique workloads or where precision is critical. Most of the time, however, a reference architecture blueprint can reduce time and complexity, and improve data protection through standardization and automation.
IBM reference architecture blueprints for TSM have companion configuration scripts that set up TSM for the requested workload quickly and consistently.
I supported TSM for 5 years as an IT Specialist Manager, and have seen many clients struggle with traditional backup server sizing. Even if you can capture details about the current backup workload, which is often impossible, you have to estimate the data growth rate. Sizing errors can result in unhappy application owners or an inability to meet recovery objectives. In contrast, a reference architecture blueprint takes you to a known destination quickly and safely.
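To make the sizing arithmetic concrete, here is a back-of-envelope sketch in Python. The function and its numbers are purely illustrative (they are not part of the blueprint tooling), and it deliberately ignores deduplication and compression; it just shows how a growth-rate estimation error compounds into a capacity error.

```python
def projected_backend_tb(front_end_tb, daily_change_pct,
                         retention_days, annual_growth_pct, years):
    """Rough estimate of back-end backup capacity in TB.

    Assumes one full copy of the (grown) front-end data plus daily
    incrementals kept for the retention window -- the kind of estimate
    traditional sizing starts from, before deduplication is factored in.
    """
    grown = front_end_tb * (1 + annual_growth_pct / 100) ** years
    incrementals = grown * (daily_change_pct / 100) * retention_days
    return grown + incrementals

# 100 TB today, 2% daily change, 30-day retention, 25% yearly growth, 2 years out
print(round(projected_backend_tb(100, 2, 30, 25, 2), 1))  # prints 250.0
```

Note that underestimating annual growth by even a few points changes the answer materially, which is exactly the trap the blueprint approach avoids by targeting pre-tested workload sizes instead.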
Recently, TSM experts had a TSM sizing discussion on ADSM.org. Baltimore-Washington TSM User Group member Wanda Prather, Stefan Folkerts, and Sergio Fuentes discussed sizing variables: workload, hardware, and storage features. As Wanda says, sizing is, "both an 'it depends' and a 'what not to do' answer." I recommend reading her post if you're interested in backup server sizing.
Reference Architecture Blueprint Details
These reference architectures are based on x86 hardware running Linux and are optimized for disk-only storage using TSM data deduplication technology. They have been tested to determine the optimal workloads and limits for each size: small, medium, or large. The blueprint includes a document, or "Cookbook", that describes the three reference architectures in detail, including IBM hardware model numbers and configuration requirements.
IBM also includes two scripts to speed up the installation and configuration, increasing time-to-value:
The first script performs a configuration check before the standard TSM Server installation: it verifies that the hardware configuration meets the blueprint specifications, validates Linux kernel settings, and verifies the configuration of the required file systems. The same script then configures the TSM Server using best practices:
It creates a DB2® instance
It defines deduplication storage pools with optimal performance settings
It defines administrative maintenance tasks optimized for deduplication scalability
It defines TSM database backup to disk
It creates a dsmserv.opt file with best-practice option overrides
It creates policy domains for database, mail, and file servers with management classes for 30-, 60- and 120-day retention
It defines backup schedules for all client types that can be easily selected when deploying the desired client workloads
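For readers unfamiliar with the server options file, here is an illustrative excerpt of the kind of dsmserv.opt a configuration script might generate. The option names are real TSM server options, but the specific values below are hypothetical examples, not the blueprint's actual tuning:

```
* dsmserv.opt -- illustrative excerpt only; values are examples,
* not the blueprint's published settings
COMMMETHOD            TCPIP
TCPPORT               1500
ACTIVELOGSIZE         131072
DEDUPDELETIONTHREADS  8
DEDUPREQUIRESBACKUP   YES
EXPINTERVAL           0
DEVCONFIG             devconf.dat
VOLUMEHISTORY         volhist.dat
```

Consult the blueprint Cookbook for the values actually validated for each workload size.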
The second script runs simulated TSM database and storage pool workloads and reports performance measurements that can be compared against the reference results measured on the blueprint configuration.
Please share your experiences with TSM reference architecture blueprints and backup server sizing. What works for you?
Mike Barton is a Worldwide Storage Marketing Manager for IBM.
The opinions expressed herein are solely mine.
IBM is looking for customers and business partners to participate in the Beta and Early Access Programs for IBM SmartCloud Virtual Storage Center.
Both the Beta and Early Access Programs (EAP) will start later this month and provide participants with an opportunity to influence the design of future Virtual Storage Center enhancements by providing feedback directly to development.
Virtual Storage Center includes all Tivoli Storage Productivity Center functions plus Advanced Analytics such as Storage Tiering Automation, VMware Integration (Clustering support and vSphere Plug-in), Self Service Provisioning and more.
Tivoli Storage Productivity Center and Virtual Storage Center customers are all invited to participate in these programs. As topics are introduced the facilitator will define the boundaries between what is included in Virtual Storage Center versus Tivoli Storage Productivity Center.
To request enrollment in the Beta program, submit the VSC/TPC Beta Signup form.
To request enrollment in the EAP program, submit the VSC/TPC EAP Signup form.
For any questions, contact Mary Anne Filosa: firstname.lastname@example.org
For information, see the Downloads page.
IBM is looking for candidates to participate in a beta program for an upcoming release of Tivoli Storage Manager.
The beta program is planned to start in late January 2014, with worldwide participation to obtain customer and business partner feedback on the release.
If you are interested in enrolling in this beta program, please submit a sign-up form here:
For any questions, please contact Mary Anne Filosa, email@example.com.
IBM is looking for candidates to participate in an Early Access Program for an upcoming release of Tivoli Storage Manager. An Early Access Program (EAP) consists of web conferences with worldwide participation to obtain customer and business partner feedback on the release's design.
If you are interested in enrolling in this EAP, please submit a sign-up form here:
For any questions, please contact Mary Anne Filosa, firstname.lastname@example.org
Did you happen to see IBM's October 29 announcement of the new Tivoli Storage Manager 7.1 and FlashCopy Manager 4.1? If not, go take a look. There are a lot of really cool new features and functions that you need to see. It will be available for electronic download on December 13, 2013.
TSM and FCM continue to invest in the Operations Center, virtualization support, application integration, security, centralized management, deduplication, resiliency, usability, ease of installation, and more. Go read the announcement letter and you will see what I mean.
If you want to see some of this in action, contact your IBM rep.
Backup redesign continues to be toward the top of most analysts’ lists for 2013 IT priorities. I’ve talked a lot about some of the catalysts behind this trend like data growth, big data, VMware and software defined storage. With IT managers redesigning, the incumbent enterprise backup vendors have a lot of motivation to offer innovative solutions that are a bit ahead of the times. The leaders have all placed strategic bets on what the winning formula will be. I discussed these bets in my post “Forrester’s take on enterprise backup and recovery.”
For its part, IBM is being quick about helping IT managers redesign. The help starts with a clear understanding of the economic benefit a redesign can bring. After all, in today’s environment few IT managers make technology moves simply for the sake of technology. Storage is about economics. I discuss this more fully in my post “Does trying to find a better economic approach to storage give you ‘Butterflies’?” But there is still efficient technology that enables these economic savings, and the person in IBM who is ultimately responsible for the technology in IBM Tivoli Storage Manager (TSM) is the product manager, Dr. Xin Wang.
Recently I spoke with Xin about the important shifts IT managers are facing and how she is helping IT managers reimagine backup.
The Line*: Xin, I’m going to start with the “Dr.” part of your title. Should folks call you the Backup Doctor?
Xin: (laughing) Well, I don’t know about that. I’m actually a doctor of Applied Physics. One thing that drove me to earn a PhD and has moved me ever since is that I love to learn. I started my career in IBM hard disk drive research, spent some time as a storage software developer and development manager, and have now been working with backup clients as a product manager for several years.
The Line: Wow, I could probably do an entire post just on your career. But let’s stay focused. What have you learned about the challenges IT managers are facing and this whole backup redesign movement?
Xin: It’s interesting. The challenges aren’t secret but they carry big implications for backup. Data is growing like crazy; that’s no secret. But it is now so big that the old method of loading an agent on a server to collect and copy backup data over a network to a tape isn’t keeping up. So IT managers are redesigning.
And what about servers? Servers aren’t servers anymore. Thanks to VMware, they are virtual machines that come, go and move around in a hurry. Traditional backup is too rigid. So IT managers are redesigning.
Administrators are changing too. The generation of backup admins who grew up tuning the environment is giving way to a new generation of backup, VMware and cloud admins who need much more intuitive and automated management tools. And so IT managers are redesigning. (Editorial comment: I discussed the change in administration in my post “Do IT managers really ‘manage’ storage anymore?”)
The Line: Okay, I think I’m seeing your trend. IT managers are redesigning. And it seems like you’ve got a clear idea of why. Can we take your list one at a time? I think my readers would be interested in what you are doing with TSM in each of these areas.
Xin: Sure, that makes sense.
Check back for part 2 of the interview in which Xin shares her near term plans for TSM. If you have questions for Xin, please join the conversation by leaving a comment below.
*The Line is my personal blog, and when it appears in the interview, it represents me as the interviewer.
I recently participated in a Vendor Day Summit for a large prospective client, where I had the opportunity to talk with them about their storage management needs. Over the past two years, they have seen over 200% growth in the total capacity of their storage infrastructure, and near-term projections expect this trend to continue. Storage growth has also led to performance concerns with tier 1 applications due to the increased demand on the infrastructure. So we came to the summit to talk to the client about IBM SmartCloud Virtual Storage Center (VSC), IBM's flagship storage management solution, which combines our market-leading technologies from IBM SAN Volume Controller, IBM Tivoli Storage Productivity Center, and IBM FlashCopy Manager with intelligent analytics and automated management to reduce administrative complexity. It helps improve capacity utilization, slow storage growth, offer flexibility in the placement of workloads to balance performance demand, extend the life of existing storage assets, and reduce the overall cost of managing the infrastructure.
During the course of the conversation, it became apparent that the storage administration staff is significantly overburdened by daily tasks like managing provisioning requests, handling fire drills, and troubleshooting end user support calls. VSC is uniquely capable of improving storage administrator efficiency through a variety of key capabilities and we are continuing to enhance those capabilities to address our clients’ most challenging problems.
IBM’s Smarter Storage vision aims to provide global data availability across physical, virtual, and cloud environments; unprecedented levels of cost efficiency and simplicity through innovation and applied analytics; and unrivalled client experience and success. VSC has been built from the ground up to embody that vision and provide our clients with a unique storage management experience.
When we first introduced VSC in October 2012, it included a new user experience that provides intuitive access to various tasks and views within the console to improve administration of complex infrastructures. Many of the legacy views from our market-leading storage resource management product, Tivoli Storage Productivity Center, were updated or completely redesigned, based primarily on direct customer feedback. During the development phase, we ran an extensive Early Access and Beta Program to enable clients to test our new features and provide direct feedback to our developers, helping to provide that unrivalled client experience and success. However, we knew when we released that first version that there was still work to do. I am pleased to say that in our plans for an upcoming release of VSC, we have made significant strides to further improve the user experience by providing more intuitive troubleshooting tools, especially in the area of performance management, as well as a completely new interface for managing service classes and storage pools. This capability will be critical for clients that are interested in transforming traditional storage environments into a ‘cloud optimized’ infrastructure that standardizes provisioning services and reduces provisioning time from days and weeks to minutes and hours.
Going back to the Vendor Day Summit, this client was also in need of help managing the growth of their environment, a multi-petabyte heterogeneous infrastructure with multiple classes of storage to service their users. One of VSC’s key features at our original launch was the ability to provide intelligent analytics on top of a virtualized storage environment. Leveraging the market leading capabilities of IBM SAN Volume Controller, administrators can create storage pools that include all of their existing storage assets, organized by performance, drive type, RAID level, etc. to consolidate the management of a heterogeneous environment. Workloads can then be quickly provisioned out of these storage pools based on service level requirements to satisfy end user requests. VSC can help the administrator select the appropriate pool of storage for workload placement based on the user defined characteristics of the workload using built-in intelligence about not only the new workload, but also its impact on existing workloads within the storage pools.
Once the workload has been provisioned, VSC collects performance and utilization data, which can be used to build a comprehensive view across the entire infrastructure. The Analytics Engine within Virtual Storage Center can then identify workloads that are not placed on the appropriate tier or class of storage based on their usage patterns. The Analytics Engine will also make recommendations to migrate volumes to a more appropriate tier of storage. This intelligent analytics capability often results in significant levels of cost efficiency for clients, as they are able to migrate many of their workloads off of the most expensive storage.
This capability was especially interesting to our prospective client at the Vendor Day Summit, and our planned enhancements to automate the migration of those volumes based on the recommendations from the Analytics Engine were viewed as a major advantage. Storage administrators will still have the ability to approve migrations before they are executed, but tying the migration recommendations to the actual execution tasks is another way IBM is helping clients improve the efficiency of their data center operations.
Stay tuned to learn more about how IBM SmartCloud Virtual Storage Center is helping clients improve storage utilization, availability, performance, and service levels of heterogeneous storage environments.
With a new school year underway, vacation season for many come and gone, and the Labor Day long weekend upon us in North America, entering September marks the unofficial end of summer. For many, this is a somewhat depressing time of year as we realize that colder temperatures and the onset of winter aren’t far off.
However, it’s not all bad news. Some of us prefer outdoor activities in the fall and winter months, and when it comes to business, the fall brings a renewed interest in sharpening our skills and seeking networking opportunities by attending industry conferences and events.
For storage professionals in North America, an ideal opportunity comes in the form of Storage Decisions New York on September 16 & 17. Storage Decisions New York plans to bring together over 500 end users, independent experts, analysts, and top solution providers to engage in thought-provoking presentations, interactive networking opportunities, and sponsor showcases featuring the latest trends and technologies impacting the storage industry. The two-day conference is the only place you will find the industry's foremost independent experts – and the most qualified group of storage professionals – under one roof.
As a platinum sponsor of Storage Decisions New York, IBM will have a multi-faceted presence at the conference with ample opportunities to engage with the storage community. One of the highlights is our Tech-in-Action talk, where IBM’s Storage Software Business Strategist Ron Riffe will outline IBM's point of view on The Critical Decisions for Improving The Economics of Storage. Ron will touch on a range of considerations including the need for improved administration, the role of software-defined and the impact of flash – just to name a few.
Over the course of the two-day event, IBM storage experts will be available in booths 24 and 25 to meet attendees and discuss practical solutions to today’s storage challenges. The IBM booth is also where attendees can pick up their complimentary conference USB key, which will be loaded with conference-wide materials and presentations.
Storage Decisions New York is worth a look as a great way to kick off the fall conference cycle. If you're planning to attend, stop by and visit us. And if you happen to be on the west coast and think New York is too far to travel, don't worry: Storage Decisions stops in San Francisco on October 30.
Last December, while attending the 2012 Gartner Data Center Conference in Las Vegas, I listened to an insightful presentation by analysts Sheila Childs and Pushan Rinnen on the bring-your-own-device (BYOD) phenomenon. They were particularly focused on issues related to protecting an organization's data in a BYOD world (more on why in a moment). One scenario that captured my attention went something like this:
It’s my device. I had it before I brought it to work and I was using Dropbox or iCloud to sync and share all my files. Now, my device has work data on it too. My security-conscious CIO doesn’t want work data shared on those public services. But I’m accustomed to, and almost dependent on my sync and share capability and my organization hasn’t yet given us a private alternative.
Now, in my role as a technology strategist, I spend a good bit of time helping to plan our investments. With the speed at which mobile and social technologies are sweeping through organizations, I have to admit that the case Sheila, Pushan, and other Gartner analysts made that week for the rapidly emerging data protection crisis in BYOD sync and share was compelling. It occurred to me that credible vendors who could solve the problem in short order would be in high demand. That was eight months ago.
Fast forward seven months
In July, Forrester analysts Ted Schadler and Rob Koplowitz published The Forrester Wave™: File Sync And Share Platforms, Q3 2013 in a quest to uncover those credible vendors. I liked the way they characterized the problem. “Employees’ need to synchronize files grew from a whisper to a scream over the past few years. . . .The scream will grow louder as the number of tablets will triple to 905 million by 2017 to join the billions of computers and smartphones used for work.” The report evaluated and scored 16 of the most significant solution providers against 26 criteria. Among the leaders was IBM SmartCloud Connections. You can see the complete list of leaders here.
Change is here
The interesting thing that most folks miss in the sync and share conversation is – it’s about more than just syncing and sharing. As BYOD smartphones and tablets proliferate in the workplace, document management will shift from email attachments and file servers into social collaboration. Forrester points to a further social shift from casual partner collaboration to compliant workflow in regulated industries.
That kind of data is important – and the reason that the Gartner analysts were focused on the data protection issues of this BYOD world. Organizations today have well matured processes for protecting data on file servers and email systems, usually with an enterprise backup product. I commented on this set of tools in my post on Forrester’s Take on Enterprise Backup and Recovery. But as corporate information is relocated from file servers and email systems to sync and share systems, Gartner had an unmistakable reminder for its customers, “Consumer File Sync/Share Is Not Backup”.
I agree! The good news is that IBM has taken the time to ensure its enterprise backup product, IBM Tivoli Storage Manager Suite for Unified Recovery, protects synched and shared files in IBM Connections with all the same efficiency it does file servers, email systems and most any other data important to an organization.
What is your organization doing with file sync and share? How are you protecting that information?
In the modern datacenter, there’s a lot of shifting going on when it comes to traditional storage management responsibilities. What used to be the domain of a central storage and backup administration team has been thrown up for grabs as server virtualization and software defined everything have entered the scene. I hinted at this a bit in my post Do IT managers really “manage” storage anymore? But let’s consider a practical example that’s quite common with clients I speak to. If you are going to VMworld 2013, plan on attending the IBM TSM for VE hands-on lab to get more details.
Microsoft SQL Server is the foundation for a lot of applications that are critical to business operation – meaning CIOs and IT managers are interested in its recoverability. Those same CIOs and IT managers are also interested in the recoverability of their VMware estates, the software defined compute (SDC) platform that houses those databases. For many clients, the problem is that these two domains are tightly guarded by two independent superheroes, and neither is specifically trained in storage.
Superhero #1: The database administrator (DBA)
Most DBAs that I’ve known have an almost personal connection with their databases. They care for them as they would their own children. The thought of leaving one unprotected (without a backup) equates to dereliction of duty. Ignoring the idea that it takes a village to raise a child (or in this case that there may be other members of the IT administration village like VMware admins and backup admins), SQL Server DBAs will often work alone with the backup tools Microsoft provides to ensure their databases are protected. Good for the SQL Server, but not so much for the surrounding infrastructure. For databases running on VMware, routine full backups even with periodic differential backups can consume a LOT of disk space and virtual compute resources, and also contribute to the I/O blender effect.
Superhero #2: The VMware administrator
TSM for VE in VMware vSphere web client
VMware administrators can be just as focused on their domain as DBAs are. Their attention is on being able to recover persistent or critical virtual machine (VM) images, regardless of what app happens to be riding along. VMware has done a nice job of creating and supporting an industry of tightly integrated backup providers. These tools can get at the VMware data through a set of vStorage APIs for Data Protection (VADP), and VMware administrators can manage them through vCenter plug-ins. But few VMware admins are completely aware of all the workloads that run on their VMs, and even fewer are aware of the unique recovery needs of all those workloads. It’s just hard to keep up.
Common ground exists
One tool that bridges the gap is IBM Tivoli Storage Manager for Virtual Environments (TSM for VE). Nicely integrated with both VADP and SQL Server, TSM for VE can bring together VMware administrators and the DBAs in ways that would make any IT manager smile. Here are two of the more common approaches.
We can each do our own thing – together
As noted above, SQL Server DBAs take full backups sprinkled with differentials. Even though this approach can tax server and storage resources, and contribute to the I/O blender effect, it is in the DBA comfort zone. When the app is running on a VMware virtual machine, the DBA has the option of storing those backups on disk storage associated with the VM. It’s a nice thing to do because it allows the VMware admin to stay within his comfort zone too. Using vCenter to drive a VADP-integrated snapshot tool like TSM for VE, the VMware admin can capture a complete copy of the virtual machine, along with the SQL Server backups the DBA created. Since the likely use of such a snapshot would be to recover the VM and then recover the database from its backup, there’s really no reason to include the source SQL Server database or logs in the snapshot. With TSM for VE, the VMware admin can exclude the source SQL Server database from being redundantly backed up, adding to an already formidable set of built-in efficiency techniques (with TSM for VE, snapshots are taken incrementally – forever – and can be deduplicated and compressed). It’s a good compromise solution, letting each admin stay in his or her comfort zone. But it can be better.
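As a sketch of how that exclusion might look in practice: TSM for VE supports an EXCLUDE.VMDISK statement in the data mover's options file to skip individual virtual disks. The VM and disk names below are hypothetical examples, not a recommended layout:

```
* dsm.opt excerpt for the TSM for VE data mover -- names are examples only
* Skip the virtual disk holding the live SQL Server database and logs;
* the DBA's own backup files live on a different virtual disk that is kept.
EXCLUDE.VMDISK "SQLPROD01" "Hard Disk 2"
```

This assumes the DBA has placed the live database files and the backup files on separate virtual disks so that one can be excluded without losing the other.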
We can join forces and do something really great
With TSM for VE, VMware admins and SQL Server DBAs can put their heads together and choose to do something really great. For the DBA, it’s an exercise in less-is-more. The DBA stops doing her own backups. No more full or differential copies of the database. No more taxing resource usage on the VM. No more I/O blender effect. Just, no more. How? Well, with a VMware VADP integrated backup tool like TSM for VE, the snapshot of the VM is accompanied by a freeze and thaw of the SQL Server database (techno-speak for putting the database in a consistent state), just like what happens when a backup is independently initiated by a DBA. And with TSM for VE, as soon as the TSM server confirms that it has successfully stored the consistent snapshot in a safe, physically separate place, it will connect back with the SQL Server to truncate the database logs.
In addition to the less-is-more benefits above, think about the differences in restore with these two scenarios. When the DBA and VMware admin simply coexist, each doing their own thing, restoring the SQL Server database includes steps for restoring:
The VM snapshot to get the database backups in place
The full database backup
The subsequent differential backups
By comparison, when the DBA and the VMware admin join forces with TSM for VE, the steps are dramatically simplified. Restoring the snapshot equates to restoring a consistent copy of the database. And remember, because these snapshots are highly efficient, they can be taken quite frequently.
Going to VMworld 2013? Come visit IBM on the Solutions Exchange floor at booth #1545.
VMworld 2013 is just around the corner and at IBM, we’re gearing up for a great set of conversations with our joint clients. As you’re planning your agenda, here are a couple of things worth looking in to.
IBM has a lot of expertise to share when it comes to optimizing virtual environments. A few weeks ago, in my post Outside the Line – an interview on Virtualization optimization, I was able to catch up with several of the experts who are leading this work. At VMworld, IBM will be showcasing these solutions on the Solutions Exchange floor at booth #1545.
IBM Tivoli Storage Manager for Virtual Environments (TSM for VE) is one of the most efficient backup integrations built on the VMware vStorage APIs for Data Protection (VADP). I offered some quick insights in my post VMware backup for the iPOD generation. At VMworld 2013, you’ll have an opportunity to take a test drive in the TSM for VE hands-on lab.
Are you going to VMworld? What are you most looking forward to?
For years, Hollywood has been enamored with the idea of artificial intelligence. Beyond tabulating, beyond programmed responses, what would happen if a computer could learn, reason, analyze, predict? In short, what could computers do if they could think? Sadly, more often than not, Hollywood’s answer resulted in some kind of disaster. In 2001: A Space Odyssey, the HAL 9000 computer decided to kill the astronauts aboard Discovery One. In the 1983 film WarGames, the WOPR computer played games with global thermonuclear war, and in the Terminator franchise, SkyNet attempted to exterminate the human race. Ugh!!
I’m proud that I work for a company that has a very different perspective on the potential of cognitive computing. Instead of blowing people up, IBMers around the world are developing cognitive systems to help us make better decisions.
Leading the Way to a New Computing Era
Number one on my list of Top 5 Observations from IBM Edge 2013 had to do with the cultural changes driving Big Data. The thing about big data is that, in large part, we don’t yet know what we don’t know. Read more...
Several months ago, IDC published an interesting Market Analysis Perspective: Worldwide Enterprise Servers, 2012 — Technology Market that uncovered something quite contrary to conventional wisdom when it comes to virtualized server environments. Server virtualization, or software defined compute (SDC) as it is coming to be known, promised to control server sprawl and return balance to the portion of the IT budget allocated to servers. The IDC research confirms that the controlled-server-sprawl part of the promise has largely been realized. Since 2000, worldwide spend on x86 server infrastructure has actually declined from $70B to about $56B. Equally important, environmental spending on power and cooling has leveled off too. The revelation that surprised most folks was the dramatic expansion in spend on management. Since 2000, spend on management has more than tripled, reaching $171B, and now accounts for 68% of IT spend on x86 infrastructure.
These days the measuring stick for server virtualization seems to be around VM density, or the number of virtual machines that can be supported on a single physical server. The theory has been that as VM density increases, management costs decline. Most IT managers I talk to can point to fairly good and increasing VM density in their environments. So what’s causing associated management costs to increase so much and what can IT managers do to improve the situation? Read more...
Recently, Forrester published The Forrester Wave™: Enterprise Backup And Recovery Software, Q2 2013. I wasn’t surprised by their suggestion that “CommVault [Simpana 10.0], EMC [Avamar 7.0 and NetWorker 10.1], IBM [TSM 6.4], and Symantec [NetBackup 7.5] lead the pack. It’s a tight four-horse race for the top honors — [they] all scored high on strategy and current offerings.” These are the four vendors that are always pushing and shoving on each other in analyst comparisons. The thing that caught my attention in this report was the expert job analyst Rachel Dines did in peeling back a complex market space to uncover some important strategic observations about each vendor. Read more...