Posted by Ron Riffe
I routinely follow a number of blogs by storage industry thought leaders. Among them is a usually insightful blog by EMC’s Chuck Hollis. Last Friday I read his post titled Software-Defined Storage – Where Are We? As Chuck described, the post was intended to explore “Where are the flags being planted? Is there any consistency in the perspectives? How do various vendor views stack up? And what might we see in the future?” The questions themselves captured my attention. First, they are great questions that everyone watching this space should want answered. Second, I wanted to see which vendors EMC was interested in comparing itself with. Notably missing from Chuck’s list was IBM, a vendor that has both a lot to say and a lot to offer on the subject of software-defined storage.
Posted by Amalore Jude
In continuation of my earlier post – Eliminate management inefficiencies and complexities associated with your cloud foray – I would like to call out some of the advanced capabilities that Tivoli Storage Productivity Center offers.
Ron’s recent post on choosing the right storage hypervisor points to ‘comprehensive performance monitoring’ as one of the key capabilities you need to successfully deploy cloud storage. That reinforces the need for sophisticated tools that can significantly reduce the burden of storage configuration (think best practices) and performance monitoring.
Bottleneck analysis
When system response is poor, the call no longer goes to the network administrator; it goes to the storage administrator. Especially in a virtualized environment, it is essential to have performance monitoring tools that provide a quick yet comprehensive view of the data path to pinpoint any bottlenecks. With Tivoli Storage Productivity Center, you can see exactly where bottlenecks occur; for example, one storage subsystem may be overutilized while another is underutilized.
Data Path Explorer offers a detailed view of all the storage entities and their connectivity. It provides performance information across the entire data path, from host to array, and lets you drill down to view performance metrics at the port level. Standard Edition, the advanced module within Tivoli Storage Productivity Center, offers advanced reporting capabilities for bottleneck analysis.
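To make the idea concrete, here is a minimal sketch of the comparison a bottleneck report is making. To be clear, this is not TPC’s API; the subsystem names, metric samples and thresholds are assumptions for illustration.

```python
# Illustrative only: TPC collects these metrics for you; this sketch just
# shows the kind of comparison a bottleneck report makes.
from statistics import mean

# Hypothetical utilization samples (percent busy) per storage subsystem.
samples = {
    "DS8000-A": [92, 95, 97, 99, 96],
    "Storwize-B": [22, 18, 25, 30, 21],
}

HOT, COLD = 85.0, 35.0  # assumed thresholds; tune for your environment

for subsystem, util in samples.items():
    avg = mean(util)
    if avg >= HOT:
        print(f"{subsystem}: avg {avg:.0f}% busy -> potential bottleneck")
    elif avg <= COLD:
        print(f"{subsystem}: avg {avg:.0f}% busy -> underutilized, rebalance candidate")
```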
According to a storage manager at a leading medical university, “With Tivoli Storage Productivity Center, I can quickly determine if there exists a bottleneck in the SAN infrastructure. Earlier it could take me days or sometimes weeks to figure that out. Now, I can do it in minutes”.
If you have recently deployed Tivoli Storage Productivity Center, make use of IBM’s Value Pack service offerings, which provide analysis of disk subsystem performance bottlenecks using native product tools. Talk to your IBM sales representative or IBM business partner for more information.
Configure your SAN the best-practices way
I touched upon SAN Planner briefly in my previous post and would like to delve a little deeper here. As mentioned earlier, SAN Planner is a wizard-based tool that assists storage administrators with end-to-end planning involving all storage components and related networks. SAN Planner helps implement best practices pertaining to replication relationships; it uses current and historical performance metrics to recommend the best configuration when commissioning new storage systems.
There are three planners for recommending storage configuration changes based on current workloads, capacity utilization and best practices:

Volume Planner – helps administrators provision storage based on capacity, compression, RAID levels and so on. It also includes replication planning, supporting sessions such as Metro Mirror, Global Mirror and FlashCopy.
Zone Planner – provides zoning and LUN masking configuration support.
Path Planner – assists in planning and implementing storage provisioning for hosts and storage systems with multipath support in fabrics.
All three planners can be invoked separately, or together in an integrated manner, from the Tivoli Storage Productivity Center console; a small sketch of what a volume plan bundles follows below. Learn more about these planners and their capabilities in the latest Redbooks.
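As a thought experiment, here is a minimal sketch of the inputs such a volume plan bundles. This is not the SAN Planner API; the class and field names are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical model of the inputs a volume-planning wizard gathers;
# field names are illustrative, not TPC's actual schema.
@dataclass
class VolumePlanRequest:
    capacity_gb: int
    raid_level: str                             # e.g. "RAID5", "RAID10"
    compressed: bool = False
    replication_session: Optional[str] = None   # "MetroMirror", "GlobalMirror", "FlashCopy"
    hosts: List[str] = field(default_factory=list)

request = VolumePlanRequest(
    capacity_gb=500,
    raid_level="RAID5",
    compressed=True,
    replication_session="GlobalMirror",
    hosts=["dbserver01"],
)
print(request)
```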
As you can see, configuring a SAN with Tivoli Storage Productivity Center is child’s play. But can you check whether your current SAN configuration conforms to best practices? Yes, you can!
SAN Configuration Analyzer provides an end-to-end check of configuration policies and ensures the correctness of storage network configurations such as zoning, multipathing and replication. In addition, the tool alerts administrators when best practices are violated.
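Here is a minimal sketch of the kind of rule such a checker evaluates, in this case a hypothetical redundant-paths policy. The hosts, paths and threshold are assumptions, not the analyzer’s real rule engine.

```python
# Illustrative policy check: every host should reach its storage over at
# least two independent fabric paths. Data and rule are assumptions.
host_paths = {
    "dbserver01": ["fabricA/port3", "fabricB/port7"],
    "appserver02": ["fabricA/port5"],  # violates the policy
}

MIN_PATHS = 2  # assumed best-practice minimum

for host, paths in host_paths.items():
    if len(paths) < MIN_PATHS:
        print(f"ALERT: {host} has {len(paths)} path(s); "
              f"best practice requires at least {MIN_PATHS}")
```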
Storage networks change frequently to accommodate evolving business policies and ever-growing data. Administrators are challenged to track configuration changes for problem determination, change management and auditing purposes. Tivoli Storage Productivity Center offers the SAN Configuration History Viewer, which provides a historical view of changes and closes the time gap in identifying problem areas associated with configuration changes.
To learn more about the IBM Tivoli Storage Productivity Center Select Series, contact your IBM sales representative or IBM Business Partner, or visit ibm.com.
Click here to join the virtual dialogue on Storage Hypervisor; share your thoughts and concerns in our group chat on October 7, 2011 from 12 noon to 1pm Eastern Time. You can log in now for a preview of topics.
Posted by Martha Westphal
And don't forget to listen to the 'open mic' conversation about Storage Hypervisors with IBM's Ron Riffe, the author of this blog series, and ESG analyst Mark Peters.
Posted by Ron Riffe
This is part 2 of a 3-part post on how somebody responsible for a private storage environment can save their company a pile of money by implementing cloud storage techniques. Part I introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. In this second post, I’m going to explain some of the key storage cloud management controls that can be used to drive down cost.
Storage services are standardized. When it comes to shopping, I avoid (at almost all costs) actually going to the store. You can keep all the time and frustration of traffic, fighting for a parking place, wandering aimlessly through aisles of choices, and standing in checkout lines. I’ll take the simplicity and speed of a good online catalog any day.
The idea of shopping from a catalog isn’t new, and the cost efficiency it offers the supplier isn’t new either. Public storage cloud service providers seized on the catalog idea quickly, both as a means of providing a clear description of available services to their clients and as a means of controlling costs. Here’s the idea… I can go to a public cloud storage provider like Amazon S3, Nirvanix, Google Storage for Developers, or any of a host of other providers, give them my credit card, and get some storage capacity. Now, the “kind” of storage capacity I get depends on the service level I choose from their catalog. These folks each offer a small set of service-level options. Amazon S3, for example, offers Standard Storage or Reduced Redundancy Storage (can you guess which one costs less?).
Most of today’s private IT environments sit at the opposite end of the pendulum swing: total customization. Every application owner, every business unit, every department wants complete flexibility to customize their storage services in any way they want. This expectation is one of the reasons so many private IT environments have such a heavy mix of tier-1 storage. Since there is no structure around the kinds of requests coming in, the only way to be prepared is to have a disk array that can service anything that shows up. Not very efficient… There has to be a middle ground.
Now, for each catalog entry, there are a variety of service levels that can be defined that cover things like capacity efficiency, I/O performance, data access resilience, and disaster protection. By this point you’re probably rolling your eyes because you know your application owners… and they’re going to want every byte of their data to have the highest available service in each of these areas (and you wonder why you have so much tier-1 storage). A little bit further into this post we’re going to talk about the wonder of usage-based chargeback, but we’re getting ahead of ourselves. For now, let’s assume you’re having a coherent conversation with your application owners and are able to define realistic needs for your database data. Maybe something like this…
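Since the original table doesn’t reproduce here, here is an illustrative sketch of what a couple of catalog entries might capture. The entry names and attribute values are assumptions, not an actual TPC catalog.

```python
# Hypothetical service catalog: each entry standardizes a small set of
# service levels instead of allowing per-request customization.
catalog = {
    "Database": {
        "volume_type": "virtual",      # provisioned through the storage hypervisor
        "thin_provisioned": True,
        "io_performance": "high",      # hot blocks eligible for SSD tiering
        "access_resilience": "multipath, mirrored",
        "disaster_protection": "async replication to remote site",
    },
    "FileShare": {
        "volume_type": "virtual",
        "thin_provisioned": True,
        "io_performance": "standard",
        "access_resilience": "multipath",
        "disaster_protection": "nightly backup only",
    },
}
print(catalog["Database"]["volume_type"])  # -> "virtual"
```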
From there, you’re back to the wizard. Actually defining the attributes of the catalog entry is a little mundane (lots of propeller-head knobs and dials to turn), but once you’re done – you’re done! – and life gets really efficient. So, let’s get the mundane stuff out of the way. First are the capacity efficiency and I/O performance attributes (be sure to notice that for “Database” we are telling the catalog we want virtual volumes – from a storage hypervisor. There will be a test in a paragraph or so :-)
Then the data access resilience attributes.
And finally the disaster protection attributes.
I told you it was a little mundane. But now come the exciting results that really drive cost out of the environment and save a huge amount of administrative time.
First is capital expense. You’re running mostly tier-1 disk arrays today. You have just finished defining the fifteen-ish catalog entries your company is going to use. Some, like “Database”, call for storage services that are often associated with tier-1 disk arrays. Most others don’t. With a little intelligent forecasting, you should be able to determine exactly how much tier-1 storage capability you really need, and how much lower-tier storage you can start using. We’ve seen clients shift their mix from 70% tier-1 to 70% lower-tier storage (a pretty significant capital expense shift). And if the thought of moving all that existing data from tier-1 to a lower tier makes you shudder, refer back to Part I of this post and look again at the data mobility provided by a good storage hypervisor (Test: did you notice that for “Database” we told the catalog we wanted virtual volumes – from a storage hypervisor…).
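To put rough numbers on that mix shift, here is a back-of-the-envelope calculation. The per-terabyte prices are invented for illustration; substitute your own.

```python
# Back-of-the-envelope capex comparison for a 1 PB estate.
# Prices are illustrative assumptions, not vendor quotes.
TOTAL_TB = 1000
TIER1_PER_TB, LOWER_PER_TB = 5000, 1500  # assumed $/TB

def capex(tier1_fraction: float) -> float:
    tier1_tb = TOTAL_TB * tier1_fraction
    return tier1_tb * TIER1_PER_TB + (TOTAL_TB - tier1_tb) * LOWER_PER_TB

before = capex(0.70)  # 70% tier-1
after = capex(0.30)   # 70% lower tier
print(f"before: ${before:,.0f}  after: ${after:,.0f}  saved: ${before - after:,.0f}")
```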
The second big savings is in operational expense (keep reading).
Here comes the request from an application owner for 500GB of new “Database” capacity (one of the options available in the storage service catalog) to be attached to some server. After appropriate approvals, the administrator simply enters the three important pieces of information (type of storage = “Database”, quantity = 500GB, name of the system authorized to access the storage) and clicks the “Go” button (in TPC SE it’s actually a “Run now” button) to automatically provision and attach the storage. No more complicated checklists or time-consuming manual procedures.
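In spirit, the whole request reduces to those three inputs. Here is a sketch of the idea; the function and host name are hypothetical, not TPC SE’s actual interface.

```python
# Hypothetical wrapper around a catalog-driven provisioning workflow.
def provision(storage_type: str, quantity_gb: int, host: str) -> None:
    # In a real deployment this would drive the storage hypervisor to
    # create, map and zone the volume per the catalog entry's attributes.
    print(f"Provisioning {quantity_gb} GB of '{storage_type}' for {host}...")
    print("Volume created, mapped and zoned per catalog policy.")

# The three pieces of information from the post:
provision(storage_type="Database", quantity_gb=500, host="dbserver01")
```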
Storage is paid per use. It’s a little-appreciated but incredibly powerful tool in the quest to drive cost out of the environment. When end users are aware of the impact of their consumption and service-level choices, they tend to make more efficient choices. Conversely (we all know what happens here), when there’s no correlation between service-level choices and end-user visibility into cost… well… you have a lot of tier-1 storage on the floor.
A chargeback tool like IBM Tivoli Usage and Accounting Manager (TUAM) completes the story we have been building…
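Here is a toy illustration of usage-based chargeback: consumers pay per gigabyte at a rate tied to the service level they chose from the catalog. The rates and usage figures are made up, and TUAM’s actual rating model is far richer.

```python
# Illustrative monthly chargeback: rates follow the chosen service level.
rates_per_gb = {"Database": 0.90, "FileShare": 0.25}  # assumed $/GB/month

usage = [
    ("finance", "Database", 500),
    ("marketing", "FileShare", 2000),
]

for owner, service, gb in usage:
    charge = gb * rates_per_gb[service]
    print(f"{owner}: {gb} GB of {service} -> ${charge:,.2f}/month")
```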
Stay tuned for Part III of this post where I’ll explore some technical thoughts you’ll want to consider when picking a storage hypervisor.
The conversation is building! Earlier this week, fellow IBM blogger Tony Pearson joined the conversation with a perspective on Storage Hypervisor integration with VMware. And IBM blogger Rich Vining added a perspective on VMware Data Protection with a Storage Hypervisor. To cap it off, we just completed our first live group chat with over 30 IT managers, industry analysts, independent bloggers, and IBM storage experts.
Join the conversation! The virtual dialogue on this topic will continue in another live group chat on September 30, 2011 from 12 noon to 1pm Eastern Time.
Posted by Ron Riffe
To borrow a phrase from a fellow blogger… Customer interest in cloud storage is very, very hot, and that’s been keeping us very, very busy. The interest underscores the fact that public storage cloud providers have sent a “cost shockwave” through the industry, and customers are taking notice.
While CIOs may still be too concerned about security and service levels to put much real corporate information in the public cloud, they have noticed that these service providers offer storage capacity at prices that are often lower than what they pay for their own private storage. Sure, a service provider theoretically has more economy of scale and so can demand a better price from its hardware vendors, but it also has profit margin to build into its “service”. There has to be more to it. The customers I talk to are wondering what these service providers are doing to operate at those costs, and whether any of their techniques can be applied in a private storage environment.
The situation raises the question: what differentiates these public storage clouds from the traditional private storage environments most clients operate? From our experience with customers, there are four significant differences.
In this post, I’m going to try to explain these four concepts in sufficient detail that somebody responsible for a private storage environment could walk away with some practical recommendations that could save their company a pile of money. Most of this isn’t really original (the concepts have been around for a while), but so few enterprises operate this way that the person who introduces their company to these ideas often looks like a genius (and who doesn’t like that!!). It’s a long topic, so I’ve broken it into 3 posts.
Storage resources are virtualized. Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them. Server hypervisors and the virtual machines they manage have improved efficiency (no more wasted compute resources), freed up mobility, and ushered in a whole new “cloud” language.
Well, the same ideas apply to storage. As administrators catch on to these ideas, it won’t be long before we’ll be asking the question “Do you remember back when virtual machines used disks that really were physical arrays (all that “physical” stuff that kept everything in one place and slowed all your processes down)?”
But storage hypervisors are more, much more than just virtual slices and data mobility. Remember, we’re trying to think like a service provider who is driving cost out of the equation. Sure, we’re getting high utilization from allocating virtual slices, but are we being as smart as we could be about allocating those slices? A good storage hypervisor helps you be smart.
Are you getting the picture of why so many enterprises are beginning to agree with Gartner that a storage hypervisor can be a great first step in transitioning traditional IT into a private cloud storage environment? Application owners come to you for storage capacity because you’re responsible for the storage at your company. In the old days, if they requested 500GB of capacity, you allocated 500GB off of some tier-1 physical array, and there it sat. But then you discovered storage hypervisors! Now you tell that application owner he has 500GB of capacity… What he really has is a 500GB virtual volume that is thin provisioned, compressed, and backed by lower-tier disks. When he has a few data blocks that get really hot, the storage hypervisor dynamically moves just those blocks to higher-tier storage like SSDs. His virtual disk can be accessed anywhere across vendors, tiers and even datacenters. And in the background you have changed the vendor storage he is actually sitting on twice because you found a better supplier. But he doesn’t know any of this because he only sees the 500GB virtual volume you gave him. It’s “in the cloud”.
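As a conceptual sketch of that hot-block movement (the extent map, IOPS figures and threshold are invented; a real storage hypervisor implements this internally as automated tiering):

```python
# Toy model of sub-volume tiering: only hot extents migrate to SSD;
# the virtual volume the application sees never changes.
extent_io_rate = {0: 15, 1: 980, 2: 22, 3: 1500}  # assumed IOPS per extent

HOT_IOPS = 500  # illustrative promotion threshold

placement = {
    ext: ("ssd" if iops >= HOT_IOPS else "nearline")
    for ext, iops in extent_io_rate.items()
}
print(placement)  # {0: 'nearline', 1: 'ssd', 2: 'nearline', 3: 'ssd'}
```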
Posted by Amalore Jude
IT managers are broadly exploiting virtual server infrastructures (hypervisors) to improve efficiency, provide transparent mobility, and deliver common manageability regardless of the type of server hardware being used. These same robust benefits are now available for virtual storage infrastructures with the IBM storage hypervisor (System Storage SAN Volume Controller and its management console, Tivoli Storage Productivity Center).
Listen to the webcast to understand how the IBM storage hypervisor can be a complementary next step in the overall IT environment virtualization process.
Learn more about IBM System Storage SAN Volume Controller and IBM Tivoli Storage Productivity Center. Reach out to IBM sales or an IBM Business Partner to understand how the IBM storage hypervisor solution can benefit your organization’s effort to virtualize and efficiently manage storage.