IBM Systems Storage Software Blog
Martha Westphal
And don't forget to listen to the 'open mic' conversation about Storage Hypervisors with IBM's Ron Riffe, the author of this blog series, and ESG analyst Mark Peters:
Ron Riffe
This is Part 2 of a three-part post on how somebody responsible for a private storage environment could save their company a pile of money by implementing cloud storage techniques. Part I introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. In this second post, I'm going to explain some of the key storage cloud management controls that can be used to drive down cost.
Storage services are standardized. When it comes to shopping, I avoid (at almost all costs) actually going to the store. You can keep all the time and frustration of traffic, fighting for a parking place, wandering aimlessly through aisles of choices, and standing in checkout lines. I'll take the simplicity and speed of a good online catalog any day.
The idea of shopping from a catalog isn't new, and the cost efficiency it offers to the supplier isn't new either. Public storage cloud service providers seized on the catalog idea quickly, both as a means of providing a clear description of available services to their clients and as a means of controlling costs. Here's the idea… I can go to a public cloud storage provider like Amazon S3, Nirvanix, Google Storage for Developers, or any of a host of other providers, give them my credit card, and get some storage capacity. Now, the "kind" of storage capacity I get depends on the service level I choose from their catalog. These providers each offer a small number of service-level options. Amazon S3, for example, offers Standard Storage or Reduced Redundancy Storage (can you guess which one costs less?).
Most of today's private IT environments sit at the opposite end of the pendulum swing – total customization. Every application owner, every business unit, every department wants complete flexibility to customize their storage services in any way they want. This expectation is one of the reasons so many private IT environments have such a heavy mix of tier-1 storage. Since there is no structure around the kind of requests that are coming in, the only way to be prepared is to have a disk array that can service anything that shows up. Not very efficient… There has to be a middle ground.
Now, for each catalog entry, there are a variety of service levels that can be defined that cover things like capacity efficiency, I/O performance, data access resilience, and disaster protection. By this point you’re probably rolling your eyes because you know your application owners… and they’re going to want every byte of their data to have the highest available service in each of these areas (and you wonder why you have so much tier-1 storage). A little bit further into this post we’re going to talk about the wonder of usage-based chargeback, but we’re getting ahead of ourselves. For now, let’s assume you’re having a coherent conversation with your application owners and are able to define realistic needs for your database data. Maybe something like this…
From there, you're back to the wizard. Actually defining the attributes of the catalog entry is a little mundane (lots of propeller-head knobs and dials to turn), but once you're done – you're done! – and life gets really efficient. So, let's get the mundane stuff out of the way. First are the capacity efficiency and I/O performance attributes (be sure to notice that for "Database" we are telling the catalog we want virtual volumes – from a storage hypervisor. There will be a test in a paragraph or so :-)
Then the data access resilience attributes.
And finally the disaster protection attributes.
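The original wizard screenshots didn't survive the trip to this page, so here is a minimal sketch of what a finished "Database" entry might capture, grouped the same way as the wizard steps above. Every attribute name and value is an illustrative assumption, not actual TPC SE syntax:

```python
# Illustrative sketch of a storage service catalog entry. Attribute
# names and values are hypothetical -- they are NOT real TPC SE
# settings, just the four attribute groups described above.

DATABASE_ENTRY = {
    "name": "Database",
    # Capacity efficiency attributes -- note: virtual volumes,
    # provisioned from the storage hypervisor.
    "capacity_efficiency": {
        "volume_type": "virtual",       # from the storage hypervisor
        "thin_provisioned": True,
        "compression": True,
    },
    # I/O performance attributes.
    "io_performance": {
        "tier": "mixed",                # hot blocks float up to SSD
        "target_iops": 5000,            # invented number
    },
    # Data access resilience attributes.
    "resilience": {
        "multipath": True,
        "mirrored_across_arrays": True,
    },
    # Disaster protection attributes.
    "disaster_protection": {
        "remote_replication": "asynchronous",
        "snapshot_interval_hours": 6,
    },
}
```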
I told you it was a little mundane. But now come the exciting results that really drive cost out of the environment and save a huge amount of administrative time.
First is capital expense. You're running mostly tier-1 disk arrays today. You have just finished defining the fifteen-ish catalog entries your company is going to use. Some, like "Database", call for storage services that are often associated with tier-1 disk arrays. Most others don't. With a little intelligent forecasting, you should be able to determine exactly how much tier-1 storage capability you really need, and how much lower-tier storage you can start using. We've seen clients shift their mix from 70% tier-1 to 70% lower-tier storage (a pretty significant capital expense shift). And if the thought of moving all that existing data from tier-1 to a lower tier makes you shudder, refer back to Part I of this post and look again at the data mobility provided by a good storage hypervisor (Test: did you notice that for "Database" we told the catalog we wanted virtual volumes – from a storage hypervisor…).
The second big savings is in operational expense (keep reading).
Here comes the request from an application owner for 500GB of new "Database" capacity (one of the options available in the storage service catalog) to be attached to some server. After appropriate approvals, the administrator can simply enter the three important pieces of information (type of storage = "Database", quantity = 500GB, name of the system authorized to access the storage) and click the "Go" button (in TPC SE it's actually a "Run now" button) to automatically provision and attach the storage. No more complicated checklists or time-consuming manual procedures.
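To see why the old checklist collapses to three inputs, here is a rough sketch of that request as automation. The class and calls are hypothetical stand-ins for whatever your storage hypervisor exposes; in TPC SE the same flow is driven from the GUI's "Run now" button:

```python
# A sketch of self-service provisioning: three inputs in, storage out.
# StorageHypervisor is a hypothetical stand-in for the real automation
# layer; only the flow itself comes from the post.

CATALOG = {
    "Database": {"volume_type": "virtual", "thin_provisioned": True,
                 "compression": True},   # trimmed-down catalog entry
}

class StorageHypervisor:
    def create_virtual_volume(self, size_gb, attrs):
        # Allocate a thin-provisioned virtual volume per the catalog entry.
        print(f"created {size_gb}GB virtual volume with {attrs}")
        return "vvol-0042"

    def map_to_host(self, volume_id, host):
        # Zone and mask so only the authorized system sees the volume.
        print(f"mapped {volume_id} to {host}")

def provision(service_class, quantity_gb, host):
    """The three pieces of information from the request, nothing more."""
    attrs = CATALOG[service_class]
    hv = StorageHypervisor()
    hv.map_to_host(hv.create_virtual_volume(quantity_gb, attrs), host)

provision("Database", 500, "appserver01")   # the 500GB request above
```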
Storage is paid per use. It's a little-appreciated but incredibly powerful tool in the quest to drive cost out of the environment. When end users are aware of the impact of their consumption and service level choices, they tend to make more efficient choices. Conversely (we all know what happens here), when there's no correlation between service level choices and end user visibility to cost… well… you have a lot of tier-1 storage on the floor.
A chargeback tool like IBM Tivoli Usage and Accounting Manager (TUAM) completes the story we have been building…
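To make the mechanism concrete, here is a back-of-the-envelope sketch of usage-based chargeback: meter consumption per service class, multiply by a rate that reflects what that class costs to deliver, and hand each owner the bill. The rates and owners are invented for illustration; TUAM meters far more dimensions than this:

```python
# Usage-based chargeback in miniature. Rates and owners are invented
# for illustration; a real TUAM deployment is far richer.

RATE_PER_GB_MONTH = {
    "Database": 0.90,   # tier-1-like services cost more...
    "General":  0.30,   # ...so owners think twice before asking for them
}

usage = [  # (owner, service class, GB consumed this month)
    ("payroll",  "Database", 500),
    ("web-team", "General",  2000),
]

bills = {}
for owner, service_class, gb in usage:
    bills[owner] = bills.get(owner, 0.0) + gb * RATE_PER_GB_MONTH[service_class]

for owner, amount in bills.items():
    print(f"{owner}: ${amount:,.2f}")   # payroll: $450.00, web-team: $600.00
```

Once the bill lands on the application owner's desk each month, the conversation about whether every byte really needs the "Database" service level tends to have a very different tone.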
Stay tuned for Part III of this post where I’ll explore some technical thoughts you’ll want to consider when picking a storage hypervisor.
The conversation is building! Earlier this week, fellow IBM blogger Tony Pearson joined the conversation with a perspective on Storage Hypervisor integration with VMware. And IBM blogger Rich Vining added a perspective on VMware Data Protection with a Storage Hypervisor. To cap it off, we just completed our first live group chat with over 30 IT managers, industry analysts, independent bloggers, and IBM storage experts.
Join the conversation! The virtual dialogue on this topic will continue in another live group chat on September 30, 2011 from 12 noon to 1pm Eastern Time.
Richard Vining
I recently read an excellent post by Ron Riffe, a fellow IBMer, discussing practical recommendations for introducing cloud techniques into a private storage environment – the end goal being to save your company a substantial amount of money while becoming more responsive to the needs of the business. The first of the four steps discussed in the post was to introduce a storage hypervisor – virtualization of your storage infrastructure. It's a good idea, especially if you have already virtualized some or all of your production server environment with something like VMware.
But there’s more to it than just the efficiency and mobility you get from virtualizing. The customers we talk to are finding new value that rises out of the synergy when both the server and storage environments are virtualized. One example is in the area of data protection. In this post, I’m going to explain the 1+1=3 effect for data protection that comes from combining VMware with a good storage hypervisor.
Let's start with a quick walk down memory lane. Do you remember what your data protection environment looked like before virtualization? There was a server with an operating system and an application… and that thing had a backup agent on it to capture backup copies and send them someplace (most likely over an IP network) for safekeeping. It worked, but it took a lot of time to deploy and maintain all the agents, a lot of bandwidth to transmit the data, and a lot of disk or tapes to store it all. The topic of data protection has modernized quite a bit since then.
Today, you're using a server hypervisor (VMware) to efficiently pack several virtual machines onto one physical server – and to make it so you can deploy, move and decommission those VMs pretty much at will. If you are still using the old techniques for data protection (deploying an agent on each individual VM, and then transferring all the backup data for those VMs through the one IP network pipe) on that physical server, you're probably running into significant performance and application availability problems, and also missing out on some significant savings (if you listen carefully, you can hear your backup environment screaming "modernize me, MODERNIZE ME!").
Fast forward to today. Modernization has come from three different sources – the server hypervisor, the storage hypervisor and the unified recovery manager. The end result is a data protection environment that captures all the data it needs in one coordinated snapshot action, efficiently stores those snapshots, and provides for recovery of just about any slice of data you could want. It’s quite the beautiful thing.
Data capture: VMware has provided a nice set of APIs that allow disk arrays and backup vendors to intelligently drive snapshots of a VMware datastore (for the techies, these are the vStorage APIs for Data Protection, or VADP). The problem is that integration from a disk array to these APIs is a tier-1 kind of service that is found on very few disk arrays today. That's where a good storage hypervisor comes in. A storage hypervisor will include its own integration between VMware VADP and hardware-assisted snapshots, and it will plug its control GUI directly into the VMware vCenter management console. That means, regardless of what type of disk array capacity you have chosen to use for your VMware data, the storage hypervisor will be able to do a hardware-assisted snapshot of the VMware datastore (all your VMs at once – sweet!).
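The choreography is roughly: quiesce the VMs in the datastore through VADP, take one hardware-assisted snapshot of the backing volume on the array, then let the VMs resume. Here is that sequence sketched in Python; every name is a hypothetical stand-in, since the real integration lives inside the storage hypervisor's vCenter plug-in:

```python
# The VADP + storage hypervisor choreography, sketched in Python.
# Every call below is a hypothetical stand-in: the real sequence is
# driven by the storage hypervisor's vCenter plug-in through VADP.

class Datastore:
    def __init__(self, name, vms, backing_volume):
        self.name, self.vms, self.backing_volume = name, vms, backing_volume

def quiesce(vm):
    print(f"  quiescing {vm} (VADP-driven, application-consistent)")

def release(vm):
    print(f"  releasing {vm}")

def hw_snapshot(volume):
    print(f"  hardware-assisted snapshot of {volume} (on the array)")
    return f"snap-of-{volume}"

def snapshot_datastore(ds):
    # One coordinated action covering every VM in the datastore.
    for vm in ds.vms:
        quiesce(vm)
    snap = hw_snapshot(ds.backing_volume)   # instant; no data copied yet
    for vm in ds.vms:
        release(vm)                         # VMs resume immediately
    return snap

ds = Datastore("prod-ds01", ["vm-web", "vm-db"], "vvol-17")
print(snapshot_datastore(ds))
```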
Efficient storage: Here’s a scenario we see…
The snapshots can add up, so efficiency is important. For the “online” snapshots, a good storage hypervisor stores only incremental changes, compresses the result and stores it as a thin provisioned volume on lower-tier disk capacity (the new 3TB SAS drives make a nice choice). Notice in this scenario, the administrator is also promoting one of the snapshots each day (say, the midnight snapshot) to an enterprise recovery manager. If you are using IBM’s Tivoli Storage Manager Suite for Unified Recovery, then it will insert deduplication in the list of efficiency techniques being applied to the snapshot (incremental snapshots that are deduplicated, compressed, and stored on lower-tier disk… that’s about as efficient as it gets).
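Reading the schedule back out of that scenario (a snapshot every 6 hours, 4 days kept online, the midnight copy promoted daily to the recovery manager), here is a sketch of the retention math; the numbers come from the scenario, but the code and its policy shape are mine:

```python
# The retention/promotion schedule from the scenario above: snapshot
# every 6 hours, keep 4 days online, promote the midnight snapshot
# each day to the unified recovery manager (e.g., TSM). The policy
# shape is a sketch, not a product configuration format.

from datetime import datetime, timedelta

INTERVAL = timedelta(hours=6)
ONLINE_RETENTION = timedelta(days=4)

def plan(now):
    online, promoted = [], []
    t = now
    while now - t < ONLINE_RETENTION:
        online.append(t)
        if t.hour == 0:                # the midnight snapshot...
            promoted.append(t)         # ...goes to the recovery manager
        t -= INTERVAL
    return online, promoted

online, promoted = plan(datetime(2011, 9, 23, 0, 0))
print(len(online), "online snapshots")   # 16: worst-case RPO is 6 hours
print(len(promoted), "promoted copies")  # 4: one per day
```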
Flexible recovery: Whether the snapshot is online or nearline, the only reason you have it is so that you can recover when something (anything) goes wrong. A good storage hypervisor / unified recovery manager combination will give VMware administrators the ability to peer inside the snapshot and recover individual files, virtual volumes, or entire VMs. Using the scenario above, your recovery point would be no more than 6 hours old for the last 4 days, and your recovery time would be measured in minutes.
IBM offers one of the world's best-known unified recovery managers and the world's most widely deployed storage hypervisor. With over 7,000 storage hypervisor deployments, we've had a lot of opportunity to build some depth. Deep integration with VMware for modernizing your data protection environment is one example. If you are running VMware and haven't yet modernized data protection, IBM can help. You can learn more at the following links.
Storage hypervisor platform: IBM System Storage SAN Volume Controller (SVC)
Storage hypervisor management, storage service catalog, and self-service provisioning: Tivoli Storage Productivity Center Standard Edition (TPC SE)
Data protection integration: Tivoli Storage FlashCopy Manager and Tivoli Storage Manager Suite for Unified Recovery
Join the conversation! The virtual dialogue on this topic will continue in a live group chat on September 23, 2011 from 12 noon to 1pm Eastern Time. Join some of the Top 20 storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
Come Listen to Tivoli Storage: Simplify Data Protection and Reduce Costs with Unified Recovery Management
Martha Westphal
On September 22, we will be hosting an educational webcast that will address the challenges of providing data protection and recovery for rapidly growing amounts of diverse enterprise data. During this call, you will hear about our unified recovery management solution that can help reduce complexity, risk and costs. Included in this solution is a new simple, value-based option for procuring and managing software licenses.
Speaker: Rich Vining, Product Marketing Manager
Date: September 22, 2011
Time: 11:00 AM Eastern US
Please register for this event using this link.
After registering you will receive a confirmation note with call-in instructions.
Ron Riffe
To borrow a phrase from a fellow blogger… Interest from customers in cloud storage is very, very hot, and that's been keeping us very, very busy. The interest underscores the fact that public storage cloud providers have sent a "cost shockwave" through the industry and customers are taking notice.
While CIOs may still be too concerned about security and service levels to put much real corporate information in the public cloud, they have taken notice that these service providers are offering storage capacity at prices that are often lower than what they are paying for their own private storage. Sure, a service provider theoretically has more economy of scale and so could demand a better price from their hardware vendors, but they also have some profit margin to build into their "service". There has to be more to it. The customers I talk to are wondering what these service providers are doing to operate at those costs – and whether any of their techniques can be applied in a private storage environment.
The situation raises the question: what is it that differentiates these public storage clouds from the traditional private storage environments that most clients operate? From our experience with customers, there are four significant differences.
In this post, I’m going to try to explain these four concepts in sufficient detail that somebody responsible for a private storage environment could walk away with some practical recommendations that could save their company a pile of money. Most of this isn’t really original (the concepts have been around for a while), but so few enterprises operate this way that the person who introduces their company to these ideas often looks like a genius (and who doesn’t like that!!). It’s a long topic, so I’ve broken it into 3 posts.
Storage resources are virtualized. Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them. Server hypervisors and the virtual machines they manage have improved efficiency (no more wasted compute resources), freed up mobility, and ushered in a whole new “cloud” language.
Well, the same ideas apply to storage. As administrators catch on to these ideas, it won’t be long before we’ll be asking the question “Do you remember back when virtual machines used disks that really were physical arrays (all that “physical” stuff that kept everything in one place and slowed all your processes down)?”
But storage hypervisors are more, much more than just virtual slices and data mobility. Remember, we’re trying to think like a service provider who is driving cost out of the equation. Sure, we’re getting high utilization from allocating virtual slices, but are we being as smart as we could be about allocating those slices? A good storage hypervisor helps you be smart.
Are you getting the picture of why so many enterprises are beginning to agree with Gartner that a storage hypervisor can be a great first step in transitioning traditional IT into a private cloud storage environment? Application owners come to you for storage capacity because you're responsible for the storage at your company. In the old days, if they requested 500GB of capacity, you allocated 500GB off of some tier-1 physical array – and there it sat. But then you discovered storage hypervisors! Now you tell that application owner he has 500GB of capacity… What he really has is a 500GB virtual volume that is thin provisioned, compressed, and backed by lower-tier disks. When he has a few data blocks that get really hot, the storage hypervisor dynamically moves just those blocks to higher-tier storage like SSDs. His virtual disk can be accessed anywhere across vendors, tiers and even datacenters. And in the background you have changed the vendor storage he is actually sitting on twice because you found a better supplier. But he doesn't know any of this because he only sees the 500GB virtual volume you gave him. It's "in the cloud".
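A toy model makes the accounting in that story visible: the owner sees a fixed 500GB, the data center floor holds only what he has actually written, and hot extents float up a tier. The class and method names below are invented for illustration, not any vendor's API:

```python
# A toy model of the 500GB virtual volume described above. The owner
# sees a fixed 500GB; physical capacity is allocated only as data is
# written, and hot extents float to a faster tier. Names are invented.

class VirtualVolume:
    EXTENT_GB = 1

    def __init__(self, size_gb):
        self.size_gb = size_gb          # what the application owner sees
        self.extents = {}               # extent -> tier, allocated on write

    def write(self, extent):
        # Thin provisioning: allocate physical capacity on first write only.
        self.extents.setdefault(extent, "nearline-SAS")

    def record_heat(self, extent, iops):
        # Hot blocks migrate up a tier, transparently to the owner.
        if iops > 1000 and extent in self.extents:
            self.extents[extent] = "SSD"

    def physical_gb(self):
        return len(self.extents) * self.EXTENT_GB

vol = VirtualVolume(500)
for e in range(40):                     # owner has written ~40GB so far
    vol.write(e)
vol.record_heat(7, iops=4200)           # one extent turns hot
print(vol.size_gb, "GB visible,", vol.physical_gb(), "GB on the floor")
print("extent 7 now on:", vol.extents[7])
```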
Maria Huntalas
The Global Tivoli User Community is hosting an all-day, storage-focused Tivoli User Group event in New York City on October 11th that will cover:
Delbert Hoobler
I wanted to let everyone know that IBM Tivoli Storage FlashCopy Manager for Windows Version 2.2.1 was just released!
In June of this year, I blogged about IBM Tivoli Storage FlashCopy Manager version 2.2.0. I talked about how FlashCopy Manager 2.2 provides fast, application-aware backups and restores leveraging advanced snapshot technologies. I also discussed how FlashCopy Manager for Windows 2.2.0 added new support for Microsoft Exchange Server 2010 and Microsoft SQL Server 2008 R2, as well as other performance and functionality enhancements.
We continue to add more functions and features to IBM Tivoli Storage FlashCopy Manager. This past Friday (December 10th, 2010), IBM released IBM Tivoli Storage FlashCopy Manager Version 2.2.1 with the following changes:
Updates Applicable to All Platforms
Updates Applicable to all FlashCopy Manager components that run on AIX, Linux, and Solaris
Updates Applicable to the FlashCopy Manager for Exchange Component
Updates Applicable to the FlashCopy Manager for SQL Component
For more details on the content of this Fix Pack, refer to the technote titled What's new in the Version 2.2.1 IBM Tivoli Storage FlashCopy® Manager Fix Pack.
For details on downloading this Fix Pack, refer to the technote titled Version 2.2.1: Fix Pack IBM Tivoli Storage FlashCopy® Manager.
Tiffeni Woodhams
The Central Depository Company of Pakistan Limited (CDC) is the only depository in Pakistan, handling the electronic settlement of transactions carried out at the country's three stock exchanges.
With numerous point management tools, time-consuming manual processes and no single help desk, IT administrators were constantly operating in a reactive mode and faced just 90 percent system availability.
IBM Business Partner Gulf Business Machines helped CDC implement an Integrated Service Management solution from IBM that increases IT efficiency while improving the effectiveness of business services.
90 percent reduction in average time for root cause analysis; estimated 50 percent reduction in time to support new lines of business; 98 percent improvement in service level agreement (SLA) levels.
"IBM Tivoli Storage Productivity Center gave us greater visibility into storage utilization, helping us optimize capacity planning and improve our storage ROI to save 30%"
—Syed Asif Shah, Chief Information Officer, Central Depository Company of Pakistan Limited
Read the complete case study for more details on the solutions CDC used to implement an Integrated Service Management solution.
More success stories of other customer implementations of IBM technologies can be found here.
Re: IBM Smarter Systems Announcement Webcast: Taming the Information Explosion with IBM System Storage
Tiffeni Woodhams
Sondra Ashmore
I have been working in storage and storage management my entire career (which has been more years than I want to admit) and I was recently advised by a wise co-worker to start writing about it. Although blogging has been around for quite some time and has certainly increased in popularity in recent years, this is the first time I have braved this form of communication. I stared at a blank blinking cursor for inspiration and decided to write about one of my favorite storage products, the Tivoli Storage Productivity Center.
Several weeks ago IBM announced the new 4.2 release of Tivoli Storage Productivity Center. This release includes some interesting enhancements that I am excited to see in the product. One feature that has received a lot of buzz is the lightweight storage resource agents. TPC started down the path of lighter agents when it introduced a slimmer, but not completely lightweight, version of the agents by moving from Java to C for enhanced performance. These new agents were limited to Windows, AIX, and Linux. The new 4.2 release added HP-UX and Solaris support as well as support for file and database-level management. The new release is backward compatible, meaning that customers who want to continue using agents they set up previously can do so. New customers are no longer required to use the Common Agent Manager.
TPC 4.2 has introduced full support for XIV devices. TPC 4.1 did have toleration support for XIV (basic discovery and capacity information), but with the new release you can provision, get performance information, and use the data path explorer with your XIV machine.
If you have TPC deployed on a System Storage Productivity Center (SSPC), you can upgrade at any time. Customers buying a new SSPC machine after September 3, 2010 will automatically have TPC 4.2 pre-installed on the machine.
I could say a lot more about the new TPC 4.2 release, but instead I am going to point you to a wonderful blog entry that my colleague, Tony Pearson, wrote when the new release was announced. He provides some great insights about the new features in TPC 4.2.
Wow - I made it to the end of my first blog, and I am beginning to understand why blogging has become so popular. I am starting to wonder why it took me so long to write one!
Re: Silverstring Launches Predatar 6 for TSM to Deliver Smarter Enterprise Data Protection, Near Perfect Backup Success
Tiffeni Woodhams
Delbert Hoobler
I wanted to share some information about an article that we just published regarding backing up Exchange Server 2010.
Along with all the other new features of Exchange Server 2010, Microsoft introduced Database Availability Groups (DAGs). DAGs are part of the large focus that Microsoft put on High Availability and Site Resilience within Exchange Server 2010. DAGs allow you to have passive database copies (aka "replicas") that can serve as hot standbys for protection against machine failures, database failures, network failures, viruses, or other issues that may cause an access problem to a database.
DAGs are similar in function to Exchange Server 2007 Cluster Continuous Replication (CCR) replicas. However, they extend the capabilities even further. One of the key benefits that customers get when they use DAGs in their enterprise is the ability to completely offload backups from their production Exchange Servers. That means they can run all of their backups from a database copy instead of the production database so as not to impact their production Exchange servers. This enables the production Exchange Servers to spend their resources on doing what they know best, i.e. handling email and facilitating collaboration.
IBM Tivoli Storage Manager for Mail: Data Protection for Exchange and IBM Tivoli Storage FlashCopy Manager completely support backing up DAG passive database copies. Data Protection for Exchange and FlashCopy Manager also support using those backups to recover the production database, as well as for recovering individual mailboxes and items. You can find more details in the IBM Tivoli Storage Manager for Mail: Data Protection for Microsoft Exchange Server Installation and User's Guide V6.1.2.
We just published an article (which includes a sample script) to help you automate backing up your Exchange Server 2010 DAG databases. We know that you will find this quite helpful in setting up your backup strategy:
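The published article carries the authoritative sample script; purely as an illustration of the shape such automation takes, here is a sketch that walks a list of DAG databases and backs each one up from a passive copy. The command and database names are placeholders, not actual Data Protection for Exchange syntax:

```python
# The shape of DAG backup automation, sketched in Python. The command
# is a placeholder -- see the published article for the actual Data
# Protection for Exchange / FlashCopy Manager syntax and sample script.

DAG_DATABASES = ["MailboxDB01", "MailboxDB02"]   # hypothetical names

def backup_passive_copies():
    for db in DAG_DATABASES:
        # Back up from the passive copy so the active, production copy
        # (and the production Exchange server) is never touched.
        cmd = f"PLACEHOLDER_BACKUP_TOOL {db} --use-passive-copy"
        print("would run:", cmd)

backup_passive_copies()
```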
Tiffeni Woodhams
Working with IBM, a hospital in Asia Pacific gained a data protection solution that meets users' data availability requirements, scales on demand to support a growing warehouse of patient data and medical images, and simplifies data migration and data recovery tasks.
The benefits of the solution include a 50% reduction in the backup window, restores of individual Microsoft Exchange objects in minutes, and system restores in under 10 minutes.
Read the complete case study to see how this Asia Pacific hospital gained peace of mind with virtualized data protection from IBM.
More success stories of other customer implementations of IBM technologies can be found here.