IBM Systems Storage Software Blog
Delbert Hoobler | Tags: exchange storage-software flashcopy tivoli storage-blog storage tsm snapshot storage-management | 9 Comments | 11,000 Visits
I wanted to share some information about an article we just published regarding backing up Exchange Server 2010.
Along with all the other new features of Exchange Server 2010, Microsoft introduced Database Availability Groups (DAGs). DAGs are part of the large focus Microsoft put on High Availability and Site Resilience within Exchange Server 2010. DAGs allow you to have passive database copies (also known as "replicas") that can serve as hot standbys for protection against machine failures, database failures, network failures, viruses, or other issues that may make a database inaccessible.
DAGs are similar in function to Exchange Server 2007 Cluster Continuous Replication (CCR) replicas, but they extend those capabilities even further. One of the key benefits customers get when they use DAGs in their enterprise is the ability to completely offload backups from their production Exchange servers: they can run all of their backups from a database copy instead of the production database, so as not to impact their production Exchange servers. This frees the production Exchange servers to spend their resources on what they do best: handling email and facilitating collaboration.
IBM Tivoli Storage Manager for Mail: Data Protection for Exchange and IBM Tivoli Storage FlashCopy Manager fully support backing up DAG passive database copies. Data Protection for Exchange and FlashCopy Manager also support using those backups to recover the production database, as well as individual mailboxes and items. You can find more details in the IBM Tivoli Storage Manager for Mail: Data Protection for Microsoft Exchange Server Installation and User's Guide V6.1.2.
We just published an article (which includes a sample script) to help you automate backing up your Exchange Server 2010 DAG databases. We hope you will find it helpful in setting up your backup strategy.
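For readers who want a feel for what such automation looks like, here is a minimal sketch, not the published sample script. It drives full VSS backups through the Data Protection for Exchange command-line client (tdpexcc); the database names and install path are hypothetical, so confirm the exact options your level supports in the User's Guide.

```python
# Hypothetical sketch: automate full VSS backups of Exchange 2010 DAG
# database copies with the Data Protection for Exchange CLI (tdpexcc).
# Database names and the install path below are illustrative only.
import subprocess

TDPEXCC = r"C:\Program Files\Tivoli\TSM\TDPExchange\tdpexcc.exe"  # verify your path
PASSIVE_DATABASES = ["MailboxDB01", "MailboxDB02"]  # copies hosted on this DAG member

def backup_database_copy(database: str) -> int:
    """Run a full VSS backup of one database copy to the TSM server."""
    cmd = [
        TDPEXCC, "backup", database, "full",
        "/BACKUPDESTination=TSM",   # send the backup to the TSM server
        "/BACKUPMETHod=VSS",        # snapshot-based backup via VSS
    ]
    return subprocess.call(cmd)

if __name__ == "__main__":
    for db in PASSIVE_DATABASES:
        rc = backup_database_copy(db)
        print(f"{db}: {'OK' if rc == 0 else f'failed, rc={rc}'}")
```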
Delbert Hoobler | Tags: storage-software storage tivoli storage-blog tsm storage-management | 8 Comments | 7,771 Visits
Have you played around with IBM Tivoli Storage FlashCopy Manager on Windows yet? If not, maybe it's time to take a look.
When you think of FlashCopy Manager, think of snapshots. FlashCopy Manager provides fast, application-aware backups and restores by leveraging advanced snapshot technologies. I have been writing software as a developer for IBM Tivoli Storage Manager for almost 20 years now, and this technology is one that is changing the industry. Yes, snapshots have been around for a while, but it is only in the last few years that applications have really started to embrace them, and in some cases even require them for their backup needs. There is just too much data to process, too much overhead in backing it up, and too little time. People want their applications to serve email and provide access to database tables, not spend their precious cycles on backups. FlashCopy Manager helps address these issues.
FlashCopy Manager follows on the heels of IBM Tivoli Storage Manager for Copy Services (TSM for CS), which provided snapshot support for Microsoft SQL Server and Microsoft Exchange Server using Microsoft's Volume Shadow Copy Service (VSS). The really cool thing is that you do not need a TSM Server to use FlashCopy Manager to manage your snapshots. It will work completely stand-alone if you want. But if you already have a TSM Server, you can use it to extend the power of FlashCopy Manager even further.
What is VSS? VSS is Microsoft's snapshot architecture. It provides the infrastructure for applications, storage vendors, and backup vendors to perform snapshots in a federated and efficient way. Microsoft considers VSS and snapshots important enough to require that any new software release coming out of Redmond be able to be backed up and restored using VSS. If you are running Microsoft Exchange Server or Microsoft SQL Server, you should take a look at snapshots. Microsoft has supported snapshots with Exchange and SQL for years, but Microsoft Exchange Server 2010 kicks it up a notch: Exchange Server 2010 supports backups only through VSS. Yes, you heard it right, Microsoft does not support legacy-style (streaming) backups with Exchange Server 2010. So, if you are planning a move to Exchange Server 2010, it really behooves you to start looking at Microsoft's Volume Shadow Copy Service (VSS), how it works, and the benefits and complexities it brings with it.
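If you want to poke at VSS before committing to it, a good first step is checking which writers are present and healthy on a server. Here is a small illustrative sketch, assuming a Windows host and the built-in vssadmin utility (run from an elevated prompt):

```python
# List VSS writers and their states using the built-in Windows vssadmin
# tool. A healthy writer typically reports a state such as "[1] Stable".
import subprocess

output = subprocess.check_output(["vssadmin", "list", "writers"], text=True)

# vssadmin prints one block per writer; pull out the name and state lines.
for block in output.split("Writer name:")[1:]:
    lines = block.splitlines()
    name = lines[0].strip().strip("'")
    state = next((l.split(":", 1)[1].strip() for l in lines if "State:" in l), "unknown")
    print(f"{name}: {state}")
```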
Microsoft's Volume Shadow Copy Service (VSS) is complex and involves multiple moving parts, so it will pay to invest some time in understanding it. I have put together some links that will help you get started:
I encourage you to take a look at Windows VSS snapshots and FlashCopy Manager to see how they might help you. Enjoy!
Sudipta Datta | Tags: virtualization customer server cloud software client manage telecom virtual-storage data-storage case-study infrastructure manager it quote | 7,673 Visits
Server virtualization and storage virtualization go hand in hand. Centralized, virtualized storage is crucial for advanced server virtualization to be flexible and easy to manage. Companies are realizing that to unleash the real potential of an agile cloud infrastructure, storage virtualization has to become as mainstream as server virtualization.
For many companies, there is a constant need for additional storage resources to support growing volumes of information. And if you don’t focus on managing your storage infrastructure, you can find that one virtual server is running out of storage capacity, even while there is ample capacity in other parts of the network.
With storage virtualization, companies now can make better use of existing investments in disk capacity and can often postpone the need to purchase additional capacity. As storage becomes virtualized, it becomes easier to manage and helps companies to adapt to business needs much faster. And not to forget, it actually costs less!
Ron Riffe | Tags: virtual-storage storage-hypervisor storage vmotion vmware live-partition-mobility cloud-storage powervm | 1 Comment | 5,607 Visits
To borrow a phrase from a fellow blogger… Customer interest in cloud storage is very, very hot, and that's been keeping us very, very busy. The interest underscores the fact that public storage cloud providers have sent a "cost shockwave" through the industry, and customers are taking notice.
While CIO’s may still be too concerned about security and service levels to put much real corporate information in the public cloud, they have taken notice that these service providers are offering storage capacity at prices that are often lower than what they are paying for their own private storage. Sure, a service provider theoretically has more economy of scale and so could demand a better price from their hardware vendors, but they also have some profit margin to build into their “service”. There has to be more to it. The customers I talk to are wondering what these service providers are doing to operate at those costs – and if any of their techniques can be applied in a private storage environment.
The situation raises the question: what differentiates these public storage clouds from the traditional private storage environments that most clients operate? From our experience with customers, there are four significant differences.
In this post, I’m going to try to explain these four concepts in sufficient detail that somebody responsible for a private storage environment could walk away with some practical recommendations that could save their company a pile of money. Most of this isn’t really original (the concepts have been around for a while), but so few enterprises operate this way that the person who introduces their company to these ideas often looks like a genius (and who doesn’t like that!!). It’s a long topic, so I’ve broken it into 3 posts.
Storage resources are virtualized. Do you remember back when applications ran on machines that really were physical servers (all that “physical” stuff that kept everything in one place and slowed all your processes down)? Most folks are rapidly putting those days behind them. Server hypervisors and the virtual machines they manage have improved efficiency (no more wasted compute resources), freed up mobility, and ushered in a whole new “cloud” language.
Well, the same ideas apply to storage. As administrators catch on to these ideas, it won’t be long before we’ll be asking the question “Do you remember back when virtual machines used disks that really were physical arrays (all that “physical” stuff that kept everything in one place and slowed all your processes down)?”
But storage hypervisors are more, much more than just virtual slices and data mobility. Remember, we’re trying to think like a service provider who is driving cost out of the equation. Sure, we’re getting high utilization from allocating virtual slices, but are we being as smart as we could be about allocating those slices? A good storage hypervisor helps you be smart.
Are you getting the picture of why so many enterprises are beginning to agree with Gartner that a storage hypervisor can be a great first step in transitioning traditional IT into a private cloud storage environment? Application owners come to you for storage capacity because you're responsible for the storage at your company. In the old days, if they requested 500GB of capacity, you allocated 500GB off of some tier-1 physical array – and there it sat. But then you discovered storage hypervisors! Now you tell that application owner he has 500GB of capacity… What he really has is a 500GB virtual volume that is thin provisioned, compressed, and backed by lower-tier disks. When he has a few data blocks that get really hot, the storage hypervisor dynamically moves just those blocks to higher-tier storage like SSDs. His virtual disk can be accessed anywhere across vendors, tiers, and even datacenters. And in the background you have changed the vendor storage he is actually sitting on twice, because you found a better supplier. But he doesn't know any of this, because he only sees the 500GB virtual volume you gave him. It's "in the cloud".
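To make that concrete, here is a toy model of what the hypervisor does behind that 500GB volume. This is purely conceptual (no real product API is modeled): extents are allocated only when written, and only genuinely hot extents get promoted to a faster tier. All names, sizes, and thresholds are invented.

```python
# Toy model of a thin-provisioned, auto-tiered virtual volume. The owner
# sees a fixed-size volume; physical extents are allocated lazily and
# hot extents migrate to SSD. Sizes and thresholds are invented.
EXTENT_MB = 16

class VirtualVolume:
    def __init__(self, size_gb: int):
        self.size_gb = size_gb   # capacity the application owner sees
        self.tier = {}           # extent index -> "nearline" or "ssd"
        self.heat = {}           # extent index -> access count

    def write(self, offset_mb: int):
        idx = offset_mb // EXTENT_MB
        self.tier.setdefault(idx, "nearline")  # thin: allocate on first write, low tier
        self.heat[idx] = self.heat.get(idx, 0) + 1

    def rebalance(self, hot_threshold: int = 100):
        for idx, hits in self.heat.items():    # promote only the hot extents
            self.tier[idx] = "ssd" if hits >= hot_threshold else "nearline"

    def allocated_gb(self) -> float:
        return len(self.tier) * EXTENT_MB / 1024

vol = VirtualVolume(500)                   # "500GB" from the owner's view
for off in range(0, 80_000, EXTENT_MB):    # but only ~80GB is ever written
    vol.write(off)
vol.rebalance()
print(f"{vol.size_gb}GB presented, {vol.allocated_gb():.1f}GB actually allocated")
```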
Ron Riffe | Tags: pay-per-use self-service service-catalog virtual-storage storage cloud-storage storage-hypervisor | 1 Comment | 5,448 Visits
This is part 2 of a 3-part post on how somebody responsible for a private storage environment could save their company a pile of money by implementing cloud storage techniques. Part I introduced the concept of a storage hypervisor as a first step in transitioning traditional IT into a private cloud storage environment. In this second post, I'm going to explain some of the key storage cloud management controls that can be used to drive down cost.
Storage services are standardized. When it comes to shopping, I avoid (at almost all costs) actually going to the store. You can keep all the time and frustration of traffic, fighting for a parking place, wandering aimlessly through aisles of choices, and standing in checkout lines. I'll take the simplicity and speed of a good online catalog any day.
The idea of shopping from a catalog isn't new, and the cost efficiency it offers to the supplier isn't new either. Public storage cloud service providers seized on the catalog idea quickly, both as a means of providing a clear description of available services to their clients and as a way of controlling costs. Here's the idea… I can go to a public cloud storage provider like Amazon S3, Nirvanix, Google Storage for Developers, or any of a host of other providers, give them my credit card, and get some storage capacity. Now, the "kind" of storage capacity I get depends on the service level I choose from their catalog. These folks each offer a small set of service level options. Amazon S3, for example, offers Standard Storage or Reduced Redundancy Storage (can you guess which one costs less?).
Most of today’s private IT environments represent the complete other end of the pendulum swing – total customization. Every application owner, every business unit, every department wants to have complete flexibility to customize their storage services in any way they want. This expectation is one of the reasons so many private IT environments have such a heavy mix of tier-1 storage. Since there is no structure around the kind of requests that are coming in, the only way to be prepared is to have a disk array that could service anything that shows up. Not very efficient… There has to be a middle ground.
Now, for each catalog entry, there are a variety of service levels that can be defined that cover things like capacity efficiency, I/O performance, data access resilience, and disaster protection. By this point you’re probably rolling your eyes because you know your application owners… and they’re going to want every byte of their data to have the highest available service in each of these areas (and you wonder why you have so much tier-1 storage). A little bit further into this post we’re going to talk about the wonder of usage-based chargeback, but we’re getting ahead of ourselves. For now, let’s assume you’re having a coherent conversation with your application owners and are able to define realistic needs for your database data. Maybe something like this…
From there, you’re back to the wizard. Actually defining the attributes of the catalog entry is a little mundane (lot’s of propeller head knobs and dials to turn), but once you’re done – you’re done! – and life get’s really efficient. So, let’s get the mundane stuff out of the way. First are the capacity efficiency and I/O performance attributes (be sure and notice that for “Database” we are telling the catalog we want virtual volumes – from a storage hypervisor. There will be a test in a paragraph or so :-)
Then the data access resilience attributes.
And finally the disaster protection attributes.
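The wizard screens themselves aren't reproduced here, but the shape of a catalog entry is easy to sketch. The entry names and attribute values below are hypothetical (this is not TPC SE syntax); the point is that each entry pins down capacity efficiency, performance, resilience, and disaster protection up front:

```python
# Illustrative shape of storage service catalog entries. Names and values
# are invented for this sketch; a real catalog tool exposes far more knobs.
from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogEntry:
    name: str
    volume_type: str        # "virtual" = carved from the storage hypervisor
    thin_provisioned: bool  # capacity efficiency attributes
    compressed: bool
    performance_tier: str   # I/O performance attributes
    multipath_access: bool  # data access resilience attributes
    remote_mirror: bool     # disaster protection attributes

CATALOG = {
    "Database": CatalogEntry("Database", "virtual", True, True,
                             "auto-tiered", True, True),
    "FileShare": CatalogEntry("FileShare", "virtual", True, True,
                              "nearline", True, False),
}
```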
I told you it was a little mundane. But now come the exciting results that really drive cost out of the environment and save a huge amount of administrative time.
First is capital expense. You're running mostly tier-1 disk arrays today. You have just finished defining the fifteen-ish catalog entries your company is going to use. Some, like "Database", call for storage services that are often associated with tier-1 disk arrays. Most others don't. With a little intelligent forecasting, you should be able to determine exactly how much tier-1 storage capability you really need, and how much lower-tier storage you can start using. We've seen clients shift their mix from 70% tier-1 to 70% lower-tier storage (a pretty significant capital expense shift). And if the thought of moving all that existing data from tier-1 to a lower tier makes you shudder, refer back to Part I of this post and look again at the data mobility provided by a good storage hypervisor (Test: did you notice that for "Database" we told the catalog we wanted virtual volumes – from a storage hypervisor…).
The second big savings is in operational expense (keep reading).
Here comes the request from an application owner for 500GB of new "Database" capacity (one of the options available in the storage service catalog) to be attached to some server. After appropriate approvals, the administrator simply enters the three important pieces of information (type of storage = "Database", quantity = 500GB, name of the system authorized to access the storage) and clicks the "Go" button (in TPC SE it's actually a "Run now" button) to automatically provision and attach the storage. No more complicated checklists or time-consuming manual procedures.
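As a sketch of that step (a hypothetical wrapper, not the TPC SE interface), the entire request reduces to three parameters once a catalog exists, because every service decision has already been made:

```python
# Hypothetical self-service provisioning sketch: with a catalog entry
# carrying all the service decisions, a request needs just three inputs.
CATALOG = {"Database": {"thin": True, "compressed": True, "tier": "auto-tiered"}}

def provision(storage_type: str, quantity_gb: int, host: str) -> str:
    entry = CATALOG[storage_type]  # service attributes are pre-decided
    # ... a real tool would now drive the storage hypervisor and SAN zoning ...
    return (f"Provisioned {quantity_gb}GB of '{storage_type}' "
            f"(thin={entry['thin']}, tier={entry['tier']}) attached to {host}")

print(provision("Database", 500, "appserver01"))
```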
Storage is paid per use. It's a little-appreciated but incredibly powerful tool in the quest to drive cost out of the environment. When end users are aware of the impact of their consumption and service level choices, they tend to make more efficient choices. Conversely (we all know what happens here), when there's no correlation between service level choices and end-user visibility to cost… well… you have a lot of tier-1 storage on the floor.
A chargeback tool like IBM Tivoli Usage and Accounting Manager (TUAM) completes the story we have been building…
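To illustrate the mechanics with a toy example (the rates and usage below are made up, and TUAM's real model is far richer), usage-based chargeback is essentially metered consumption priced by service tier, which is exactly what makes owners think twice before demanding tier-1 for everything:

```python
# Toy usage-based chargeback: price per GB-month varies by service tier,
# so owners see the cost of their service-level choices. All figures are
# invented for illustration.
RATES = {"tier1": 1.00, "nearline": 0.35}  # $ per GB-month (hypothetical)

usage = [            # (owner, tier, GB-months consumed)
    ("payroll-db", "tier1", 500),
    ("dev-test", "nearline", 2000),
]

for owner, tier, gb in usage:
    print(f"{owner}: {gb}GB on {tier} -> ${gb * RATES[tier]:,.2f}")
```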
Stay tuned for Part III of this post where I’ll explore some technical thoughts you’ll want to consider when picking a storage hypervisor.
The conversation is building! Earlier this week, fellow IBM blogger Tony Pearson joined the conversation with a perspective on Storage Hypervisor integration with VMware. And IBM blogger Rich Vining added a perspective on VMware Data Protection with a Storage Hypervisor. To cap it off, we just completed our first live group chat with over 30 IT managers, industry analysts, independent bloggers, and IBM storage experts.
Join the conversation! The virtual dialogue on this topic will continue in another live group chat on September 30, 2011 from 12 noon to 1pm Eastern Time.
Tiffeni Woodhams | Tags: ibm storage-software storage ibmstorage tivoli ibmtivoli storage-blog storage-management | 1 Comment | 5,291 Visits
Welcome to the Tivoli Storage blog.
We have gathered a team of SMEs from various areas of the business to discuss a variety of topics, spanning different interest areas including customer success stories, upcoming events, Business Partner spotlights, technical tips and tricks, product strategy, roadmaps and hot topics -- and of course, topics of interest to you!
Introducing the team!
BJ Klingenberg: Senior Technical Staff Member - Storage Software, IBM Software Group
BJ has over 25 years of storage software strategy and development experience. He has held various technical and management positions, nearly all of which have been related to storage software. His experience in enterprise storage management includes DFSMS, DFSMShsm, and DFSMSdss, as well as Tivoli Storage Manager, Tivoli Storage Productivity Center (TPC), and System Storage SAN Volume Controller (SVC). He has also been involved in projects applying ITIL management best practices to enterprise storage management. BJ is currently focusing on storage archiving solutions. BJ is a graduate of the University of Illinois Urbana-Champaign, where he received a Bachelor of Science degree in Computer Science, and holds a Master of Science degree in Computer Science from the University of Arizona.
Dave Rice: Business Partner Marketing, Tivoli Storage Software
Dave currently works in IBM's Worldwide Software Group, where he drives Business Partner marketing for Tivoli storage software with a focus on the Asia Pacific and Japan geographies. In this role, Dave influences the Business Partner sales pipeline through lead/pipeline analysis, progression activities, partner communications, and implementing programs that provide Business Partner opportunity identification. Dave has been in a broad set of storage software marketing roles for the past 13 years and has 35 years with IBM. Outside of IBM, Dave's interests include astronomy, as well as home and life improvement projects.
Del Hoobler: Senior Software Engineer
Del is a Senior Software Engineer who has worked for IBM for over 20 years in software design, development, and services. For the past 13 years, he has worked on designing and developing software products for the IBM Tivoli Storage Manager (TSM) suite of products. Most recently, Del was the technical development lead for the TSM Windows snapshot (VSS) support for Microsoft Exchange Server and Microsoft SQL Server. Del enjoys working with people and helping solve their complicated IT problems.
Devon Helms is currently an intern with the IBM Tivoli Software group and a second-year MBA candidate at the Paul Merage School of Business at UC Irvine. His studies focus on business strategy and corporate finance. Before returning to the academic world to pursue his MBA, Devon was a business operations and technology consultant. He has been involved in hundreds of engagements, analyzing and improving his customers' business processes. After his studies are complete, Devon wants to continue to help clients improve the performance of their businesses through business process and financial analysis. In his free time, Devon is an avid marathon runner, rock climber, and SCUBA diver. Devon lives in Lakewood, CA with his lovely wife, Shana, and his 8-year-old Siberian Husky and faithful running partner, Frosty.
Greg Tevis: Tivoli Storage Technical Strategist
Greg has over 27 years in IBM storage hardware and software development. He worked in ADSM/TSM architecture and technical support in the 1990s and was one of the original architects of IBM's storage resource management solution, Tivoli Storage Productivity Center (TPC). He currently has responsibility for technology strategy for all Tivoli Storage and was involved in all of the recent IBM Storage acquisitions including XIV, Diligent, FilesX, Novus Consulting, and Arsenal Digital.
Jason has been the product manager for the Tivoli Storage Productivity Center (TPC) family since joining IBM in 2006. Prior to joining IBM, Jason was a product manager at EMC and Prisa Networks, responsible for the road map and strategy of various storage management offerings. When not helping define the direction for TPC, Jason acts as the President for Classic Soccer Club, a youth soccer club where his son currently plays.
John Connor: Product Manager
John is the Product Manager for IBM's flagship data protection and recovery offerings, the Tivoli Storage Manager family. During John's tenure as product manager, TSM has experienced strong growth, growing faster than the overall market and gaining market share. Prior to joining the Tivoli Storage Manager team in 2005, John helped drive the business strategy for IBM Retail Store Solutions. Before that, John had product and marketing roles in various IBM software businesses, including WebSphere and networking software. John has an MBA from Duke University and an undergraduate degree in electrical engineering from Manhattan College. In his spare time, John enjoys competing in triathlons and has successfully completed an Ironman triathlon.
John R. Foley Jr.: Product Marketing Manager
John is currently a marketing manager within IBM's Tivoli storage software marketing team. John has over 20 years of experience in the areas of storage hardware, storage software and system networking. He has held positions in management, product line management, strategy, business development and marketing. In the past 10 years, he has served on multiple storage projects including SAN storage (fibre channel & iSCSI), Network Attached Storage (NAS) and fibre channel switch offerings. Most recent projects include the introduction of IBM's System Storage N series portfolio stemming from the NetApp OEM agreement and the release to market of IBM's newly introduced Tivoli Storage Productivity Center Version 4 and IBM Information Archive Version 1.
Kelly Beavers: IBM Storage Software Business Line Executive
Kelly joined the IBM Storage Software team in 2004 as Director of Strategy and Product Management for Storage Software and Solutions. Her team is responsible for guiding the development and release of products that capitalize on market/technology trends, and for defining and executing tactical go-to-market plans for IBM storage software solutions across both the Tivoli and Systems Storage brands. Kelly has 28 years with IBM where she's held a variety of roles including Finance, Pricing, Tivoli Channel Development, Director of Customer Insight, managing Market Intelligence, Customer Relations and Marketing Operations. Kelly is married with two daughters, ages 19 and 12.
Matt Anglin: Tivoli Storage Manager Development
Matt has been a member of the Tivoli Storage Manager Server Development Team for 15 years. His areas of expertise include data movement to and within the server, deduplication, shredding, and DB2 interactions. He is the AIX platform expert in TSM, and is knowledgeable about other Unix, Linux, and Windows platforms. Matt lives in Tucson, Arizona.
Matthew Geiser: Manager, Storage Software Product Management
Matt joined IBM in 2001 and has worked in product management and product development for Storage Software offerings including SAN Volume Controller, Tivoli Productivity Center, Tivoli Storage Manager and IBM Information Archive. Matt's current responsibilities include managing the product management team for the storage infrastructure management offerings. Prior to IBM, Matt worked in a variety of operations, project management and software development roles in the banking and energy industries.
Milan Patel: Senior Product Marketing Manager
Milan is responsible for product marketing of IBM storage software for virtualized server environments, storage clouds, and, of course, everyday issues in storage management like backup, recovery, archiving, and replication. Milan has been with IBM for over 6 years, working in server and storage systems and storage software marketing groups. Prior to that, Milan spent 13 years in various capacities, from development to product management, across various server subsystems and systems management.
Richard Vining: Product Marketing Manager
Rich is the Product Marketing Manager responsible for the IBM Tivoli Storage Manager portfolio of products. Rich joined IBM in April 2008 as part of the acquisition of FilesX, where he served as Director of Marketing. Rich has more than 20 years of experience in the data storage industry, holding senior management roles in marketing, alliances, customer support and product management at a number of leading edge companies, including Signiant, OTG Software, Plasmon and Cygnet. Rich enjoys eating, drinking, travelling and golfing (but doesn't everybody?)
Rodney Fannin: Worldwide Channel Manager, Tivoli Storage Software
Rodney has over 15 years of experience in working with Business Partners. Primary responsibilities include refining the channel strategy for Storage software and developing sales and marketing tactics to increase reseller revenue worldwide. Rodney is also a contributing author for the BP Spotlight on our blog.
Roger Wofford: Product Manager
Roger is currently a Product Manager in Tivoli Storage Software. He has experience in Manufacturing, Development, Marketing and Sales within IBM. He enjoys golf, swimming and the Rocky Mountains. Roger plans to blog about how customers use archiving solutions in their storage environments.
Ron Riffe: IBM Storage Software Business Strategist
Ron is currently the business strategist for IBM Storage Software. During the last six years, Ron has been devising and implementing IBM's storage software strategy with a focus on creating greater client value through integrating IBM storage software and storage hardware offerings. Ron has managed storage systems and storage management software for more than 23 years, holding positions in senior management, product line management, strategy and business development for both IBM System Storage and IBM Tivoli Storage. Ron has written papers on the synergies of storage automation and virtualization and frequently speaks at conferences and customer locations on the subject of storage software. Prior to joining IBM, Ron spent 10 years as a corporate storage manager for international manufacturing firm Texas Instruments after receiving a B.S. in Computer Science from Texas A&M University.
Shawn Jaques: Manager, IBM Tivoli Storage Product Management
Shawn has been in his current role as manager of storage software product management for nearly three years. The team is responsible for product strategy, content, positioning, and pricing of IBM storage software solutions. Previously, Shawn had product and market management roles in other Tivoli product areas, as well as a stint in Tivoli Strategy. Before joining IBM, Shawn was a Consulting Manager at Cap Gemini consulting and an Audit Manager at KPMG. Shawn has a Master of Business Administration from The University of Texas at Austin and a Bachelor of Science from the University of Montana. He lives in Boulder, Colorado and enjoys fly-fishing, skiing, and hiking with his wife and kids.
Terese Knicky: Analyst Relations Tivoli
Terese is with Tivoli's analyst relations team, covering Storage, System z, job scheduling, and IBM's general enterprise solutions. Terese was born and raised in Omaha, NE and transplanted to Texas, where she enjoys watching her two boys play college football.
And finally, let's talk about me. I'm Tiffeni Woodhams, and I have been with IBM for nearly seven years. Currently, I am a Tivoli Storage Marketing Manager responsible for general marketing activities, ranging from pipeline measurement and tracking to providing marketing execution guidance and communications to the geography teams. I am also the Tivoli Storage social media lead and co-lead for the IBM Storage social computing strategy, and I work on major launches like Dynamic Infrastructure and Information Infrastructure, providing the storage messaging and linkages. Prior to this role, I held several other marketing positions, including Tivoli Provisioning Go-to-Market Manager; Benelux Software Marketing Manager focusing on Tivoli, WebSphere, and Lotus; Americas Tivoli Marketing Manager; and Tivoli Launch Strategist. In my spare time, I enjoy playing sports (basketball, softball, and golf), coaching JV girls basketball, riding horses, and spending time with family and friends.
Now that you know a little background on each of the team members, we hope that you will let us know some of your interest areas when it comes to IBM Storage and IBM Tivoli Storage Software solutions. Please post comments to this blog and let us know what you want to hear about.
Some topics we will be discussing in the next month include:
Pulse 2010, the Premier Service Management Event
Data Reduction - the steps to get to where you want to be
Archiving - why you need to do it
Unified Recovery Management
New product announcements and roadmaps.
Thanks and we look forward to hearing your feedback.
Steve Wojtowecz | Tags: tsm tpc tivoli-storage-manager cloud-storage management storage | 5,026 Visits
Every year I try to publish a set of storage trends that I believe most IT shops are trying to address, and where technologies exist to help. Here are my thoughts for 2012...
1) Storage breakthroughs nipping the “Digital Dark Age” in the bud
Since the early 1990s, an increasing proportion of the data created and used has been digital. Today, the world produces more than 1.8 zettabytes of digital information a year. Yet digital storage can in many ways be more perishable than paper: disks corrode, bits "rot", and hardware becomes obsolete. This presents a real concern of a "Digital Dark Age", where digital storage techniques and formats created today may not be viable in the future as the technology originally used becomes antiquated. We've seen this happen—take the floppy disk for example. A storage tool that was so ubiquitous people still click on its enduring icon to "save" their word, presentation, or spreadsheet documents—yet most Millennials have never seen one in person. But new research shows storage media can become vastly denser than they are today, new form factors such as solid-state disks will help provide more stable, longer-term preservation of data, and the promise of "the cloud" allows access to data anywhere, anytime. Recently, IBM researchers combined the benefits of magnetic hard drives and solid-state memory to overcome the challenges of growing memory demand and shrinking devices. Called Racetrack memory, this breakthrough could lead to a new type of data-centric computing that allows massive amounts of stored information to be accessed in less than a billionth of a second. This storage research challenges previous theoretical limits to data storage—helping ensure our digital universe will always be preserved.
2) Data curation will provide structure in midst of the data deluge
Now that we have the capability to preserve our digital universe, we need to find a way to make it useful. We need to take the next step past data preservation to data curation. Data curation is the active and ongoing management of data through its lifecycle. This smarter data categorization adds value to data, helping glean new opportunities, improve the sharing of information, and preserve data for later re-use. Social media is a great example of the power of curated data. Sites like Facebook, Google+, and Pinterest compile our digital lives and give their users a platform to organize their content. However, there's also a lot of work involved in selecting, appraising, and organizing data to make it accessible and interpretable. The key is bringing data sets together, organizing them, and linking them to related documents and tools. If data can be stored in a way that provides context, organizations can find new and useful ways to use that data.
3) Storage analytics will open new business insights
With data curation giving organizations a platform to better utilize their data, analytics will help turn that data into intelligence and, ultimately, knowledge. With the information that historical trending analytics and infrastructure analytics provide, you can index and search in a more intelligent way than ever before. By running analytics on stored data, in backup and archive, you can draw business insight from that data, no matter where it exists. The application of IBM Watson technology to healthcare provides a good example. Watson collects data from many sources and is able to analyze its meaning and context. By processing vast amounts of information and applying analytics, it can suggest options targeted to a patient's circumstances and assist decision makers, such as physicians and nurses, in identifying the most likely diagnosis and treatment options for their patients. Through intelligent storage and data retrieval systems, we can learn more from the information we have today to improve service to customers or open new revenue streams by leveraging data in new ways.
4) Storage becomes a celebrity – new business needs are pushing storage into the spotlight
As our digital and data-driven universe expands, certain industries are reaching new levels of innovation by having the capacity to house, organize, and instantaneously access information. For example, Hollywood is known for its big-budget blockbusters, but it's the big storage demands required by new formats such as digital, CGI, 3D, and high definition that are impacting not just the bottom line, but studios' ability to produce these types of movies. Data sets for movies have grown to the petabyte level. Filmmakers are beginning to trade in film reels for SSDs, as just one day's worth of filming can generate hundreds of terabytes of data. The popularity of these high data-generating formats means studios are looking for new storage technologies that can handle the demand. The healthcare industry may be facing an even bigger data dilemma than the entertainment business. Take the University of Leipzig in Germany, which runs a major genetic study called LIFE to examine disease in populations. LIFE is cataloging the genetic profiles of several thousand patients to pinpoint gene mutations and specific proteins. This process alone generates multiple terabytes of data. Even one 300-bed hospital may generate 30 terabytes of data per year. Those figures will only grow with higher-resolution medical imaging and new tools or services such as making electronic healthcare records available online.
5) Intervention...The Data Hoarder
In this era of Big Data, more is always better, right? Not so – especially when every byte of data costs money to store and protect. Businesses are turning into data hoarders, spending too much time and money collecting useless or bad data, potentially leading to misguided business decisions. This practice can be changed with simple policy decisions and by implementing capabilities that already exist in smarter storage, but companies are hesitant to delete any data (and often duplicate data) for fear of needing it down the line for business analytics or compliance purposes. Part of the solution starts with eliminating the copies: nearly 75% of the data that exists today is a copy (IDC). By deleting redundant information and disabling its spread, organizations are investing in data quality and availability for the content that matters to the business. Consider the effect of unneeded data replicating throughout an organization's information systems, costing money all the while. This outdated data can also potentially be accessed for fraud.
Raising the quality of data is not costly—not getting it right is.
Latest ESG Report: Tivoli Storage Manager proves to be a “turnkey” solution to a range of data protection issues
Sudipta Datta | Tags: deduplication esg data ibm backup protection tivoli tsm manager storage analyst review | 4,366 Visits
Data protection matters! Actually, it matters even more with the advent of big data. The unique challenges of managing and protecting big data have forced IT professionals to revisit their data backup and protection policies. And when they were asked what they would characterize as challenges with their organizations' current data protection processes and technologies, "cost" and the "need to reduce backup time" emerged as the major concerns.
Every year, ESG conducts a forward-looking spending intentions survey. They shared a couple of interesting facts that do not surprise me but definitely reinforce my thoughts: when organizations were asked about their most important IT priorities over the next 16-18 months, 30 percent responded "improved data backup and recovery"!
ESG analysts Mark Peters and Tony Palmer shared these insights as they took us through the results of their lab testing of Tivoli Storage Manager. If you are not familiar with IBM Tivoli Storage Manager (TSM), it is scalable client/server software primarily designed for centralized, automated data protection. The goal of the ESG report is to educate IT professionals and provide insight into advanced data backup technologies, such as incremental-forever backup and deduplication, and why they are so important today. Click here for the ESG video.
The TSM Lab Validation was performed using a combination of hands-on testing, audits of IBM customers in live production environments, and detailed discussions with IBM experts. The objective was to validate some of the valuable features and functions of the product, show how they can be used to solve real customer problems, and identify any areas for improvement.
IBM has continuously invested in the TSM platform, bringing innovation to data protection and recovery. ESG evaluates how the newer versions of TSM provide a turnkey solution to a range of data protection issues. They found that the two technologies (deduplication and progressive incremental backups) working in tandem were able to achieve 90 percent data reduction after just six incremental backups and 95 percent data reduction after ten backups. The replication function is also fully integrated with deduplication, enabling quicker recovery during disasters. TSM uses policy-based automation along with intelligent move-and-store techniques, helping to reduce data administration effort. Overall, ESG's validation rightfully points to the key enhancements to the TSM platform that drive greater scalability, efficiency, and data availability.
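ESG's 90 and 95 percent figures come from their lab measurements, but a back-of-envelope model shows the shape of the math. The daily change rate and deduplication factor below are assumptions chosen for illustration, and the comparison baseline is a regime that stores a full copy per backup cycle:

```python
# Toy model of incremental-forever plus deduplication versus storing a
# full copy each backup cycle. The 2% change rate and 0.5 dedup factor
# are assumptions; ESG's published figures come from lab testing.
PRIMARY_TB = 10.0
DAILY_CHANGE = 0.02   # fraction of data that changes per day (assumed)
DEDUP_FACTOR = 0.5    # dedup halves what remains (assumed)

def full_per_cycle(n: int) -> float:
    return n * PRIMARY_TB

def incremental_forever(n: int) -> float:
    stored = PRIMARY_TB + (n - 1) * PRIMARY_TB * DAILY_CHANGE
    return stored * DEDUP_FACTOR

for n in (6, 10):
    base, opt = full_per_cycle(n), incremental_forever(n)
    print(f"{n} backups: {base:.0f}TB vs {opt:.1f}TB ({1 - opt/base:.0%} reduction)")
```

Under these assumptions the model lands near ESG's measured 90 and 95 percent marks at six and ten backups, which is the compounding effect the report describes.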
Please register and download the detailed 23-page ESG Lab Validation Report here.
Opinions are my own
Steve Wojtowecz | Tags: tsm storage-management storage tpc | 4,086 Visits
Healthcare, financial services, media and entertainment, and scientific research are among the many industries facing the challenge of storing and managing the proliferation of data to extract critical business value. As storage needs rise dramatically, storage budgets lag, requiring new innovation and approaches to storing, managing, and protecting Big Data, cloud data, virtualized data, and more.
Watson-inspired Storage Takes on the Cosmos: IBM is working on a project with the Institute for Computational Cosmology (ICC) at Durham University in the U.K. and Business Partner OCF to build a storage system to better store and manipulate Big Data for its cosmology research on galaxies. ICC is adopting the same IBM General Parallel File System technology used in the IBM Watson system to store and manage more than one petabyte of data from two significant projects on galaxy formation and the fate of gas outside of galaxies. The enhanced storage system will enable up to 50 researchers working collaboratively to access and review data simultaneously. It will also help ICC learn to manage data better, storing only essential data and storing it in the right place.
New Storage Platform Delivers More Personalized, Visual Healthcare: A medical archiving solution from IBM Business Partners Avnet Technology Solutions and TeraMedica, Inc. powered by IBM systems, storage and software gives patients and caregivers instant access to critical medical data at the point-of-care. Developed in collaboration with IBM, the medical information management offering can manage up to 10 million medical images, helping health care practitioners provide better patient care with greater efficiency and at reduced costs. The integrated platform allows users to manage and view clinical images originating from different treatments and providers to bring secure, consistent image management and distribution at point-of-care.
Virtualization Consolidates Storage Footprint for Medical Center: Kaweah Delta Health Care District (KDHCD), a general medical and surgical hospital in Visalia, Calif., needed to reduce its operational costs while increasing storage space. To meet these demands, KDHCD tapped IBM's storage systems to create a new storage platform that reallocates resources and saves a significant amount of data space with thin-provisioning technology. Virtualization creates a smaller hardware footprint so the hospital also saved on power and cooling costs. KDHCD now has a consolidated storage environment that provides the scalability, ease-of-management, and security to support critical healthcare data management for the hospital.
Richard Vining | Tags: restore vm productivity center backup vsphere volume manager virtual vadp tivoli data unified hypervisor controller san vcenter protection recovery storage cloud | 4,061 Visits
We’re getting deep into the planning for our 6th annual PULSE conference (ibm.com/pulse), and I’m getting very excited about the storage content that is being assembled. Again, it will be at the MGM Grand Hotel in Las Vegas, March 3 – 6, 2013.
At our Storage Track Kickoff session, we’ll have some new things to announce and highlight, and we’re close to announcing an exciting keynote speaker.
Following the track kickoff, we'll have 20 breakout sessions on data protection and storage management topics, covering advances in virtual machine protection, disaster recovery, cloud integration, and a lot more. We're mixing it up a lot more this year to ensure you get a range of perspectives. We'll have 21 client speakers discussing their experiences and best practices, plus 8 business and technology partners providing insights into added-value approaches to storage management, complemented by IBMers sharing the new stuff we've been working on.
Among the client speakers will be storage professionals from across the globe representing major banking, healthcare, media, industrial and university organizations. There will also be sessions on a variety of cloud topics, including private cloud storage and backup-as-a-service opportunities.
To follow on a theme mentioned by Steve Mills in his keynote at PULSE 2012, we’ll show how IBM “eats its own cooking”, sharing how IBM’s Office of the CIO transformed its massive storage infrastructure; and how IBM’s Strategic Outsourcing services organization is leveraging our products to more effectively manage their clients’ storage environments.
There will be many cool things to see in the expo center again this year, including offerings from many of our ecosystem partners, and you can roll up your sleeves in the hands-on labs and product training and certification areas.
Have you heard about this year's PULSE PALOOZA entertainment? We rocked the Grand Garden Arena with Maroon 5 in 2012, and will follow that with Carrie Underwood in 2013.
Now’s the time to act. Early bird registration, which saves client attendees $500 off the conference fee, closes December 31st. Go to http://ibm.co/pulseregister and get ready for an outstanding event. I look forward to seeing you there.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Michael Barton | Tags: svc video vmware ve tsm ibm youtube tpc pulse2013 storage cloud tivoli virtualization | 1 Comment | 4,055 Visits
Clients, analysts, and IBM experts shared their experiences and storage plans on video at Pulse 2013. Take advantage of their perspectives. Use this guide to select videos.
Updated March 18 with new URL for IDC video, and link to more Pulse interviews
Royse Wells, International Paper
Royse Wells, Chief Storage Architect for International Paper discusses
Jeff Jones, UNUM
UNUM Uses Tivoli Storage Manager for Virtual Environments
Jeff Jones is senior infrastructure manager at UNUM, a leading provider of financial protection benefits in the
Klavs Kabell, IT-WIT
Modernizing Backup for Today’s Virtual Environments
Klavs Kabell is a Senior System Consultant at IT-WIT, an IBM Business Partner in
Thomas Bak, Front-safe
Cloud backup and archive using TSM and Frontsafe Portal
Front-safe received the Best Cloud Solution award at the IBM Pulse 2013 conference, and the 2013 IBM Beacon Award for the Best Solution to Optimize the World’s Infrastructure. Learn about the value of enabling backup as a cloud service, using Front-safe Portal software.
Laura DuBois, IDC, and Steve Wojtowecz, IBM
IDC Update on Cloud and Storage
Laura DuBois, Program VP of Storage for IDC, and Steve Wojtowecz, IBM VP of Storage and Networking Software, discuss client opportunities and requirements for storage clouds and compute clouds. Client cloud storage requirements include backup and archive clouds, file storage clouds, and storage that supports compute clouds.
Chris Dotson, IBM CIO Office
IBM’s storage transformation featuring SmartCloud Virtual Center
Chris Dotson works in IBM’s CIO Office as a Senior IT Architect for Services Transformation. He is guiding IBM’s own storage transformation. As a large enterprise, IBM manages over 100 petabytes of data, growing at 25% per year. Chris discusses block storage virtualization, automated block storage tiering, file cloud storage, and automated block storage management at IBM. He shows how
BJ Klingenberg, IBM Global Technology Services
BJ Klingenberg is a Distinguished Engineer and Enterprise Storage Management lead for IBM. BJ shares his experiences using
Jason Buffington, ESG and Tom Hughes, IBM
ESG Update on Data Protection and Current Shifts in IT
Jason Buffington, ESG Senior Analyst, and Tom Hughes, IBM Worldwide Storage Executive discuss business and technical challenges for data protection. Tom and Jason discuss new solutions and Best Practices for protecting data more efficiently and effectively for today’s cloud, mobile and virtual environments.
Colin Dawson, IBM
Colin Dawson, TSM Server Architect introduces
Jonathan Bryce, OpenStack Foundation founder and Todd Moore, IBM
OpenStack Provides Compute, Storage and Network Interoperability for Clouds
The OpenStack Foundation has gained 170 corporate and over 8,200 individual members since its inception in 2012, making it one of the fastest growing cloud standards. Jonathan Bryce, Executive Director and founder of the OpenStack Foundation, and Todd Moore, IBM Software Group Director of Interoperability and Partnerships discuss the capabilities and opportunities for building cloud solutions using OpenStack to manage compute, storage and network resources.
Deepak Advani, IBM
Optimizing IT Infrastructures for Today’s Workloads
Deepak Advani, General Manager of Tivoli Software discusses top issues and opportunities facing clients as they adopt new breeds of applications to engage with customers and improve operations using mobile devices, cloud and analytics.
More interviews can be found at the Pulse Expo interviews playlist on www.youtube.com/ibmpulse
The opinions expressed herein are solely mine.
Michael Barton | Tags: analytics ibm svc virtualization information-lifecycle snapshot smartcloud tiering vmworld storage cloud-storage business-transformation | 3,955 Visits
As an IBM marketing manager, my job includes writing about storage technology. This post is about more than technology, though. It’s about a new breakthrough capability for managing storage costs and service levels.
I recently met with IBM Distinguished Engineer Mike Sylvia, who has been working on a Business Transformation project to enable automated right tiering for storage in IBM data centers. Right tiering is the notion that data should be hosted on the optimal storage tier to balance cost and performance requirements.
Mike explained that applications tend to be hosted on top-tier storage. When he analyzed actual usage patterns, however, he found most data can be effectively hosted on lower-cost storage. Mike's project put numbers to a problem that is often hidden from view and, until now, nearly impossible to solve.
Hosting data on the wrong storage tier turns out to be a huge efficiency problem. Mike predicts IBM will save $13 million over 3 years in one data center by periodically moving data to the right tier. During the pilot, users saw their cost for storage drop by 50% per TB on average. This is big.
Like many advancements, IBM’s automated right tiering capability is accomplished by integrating existing technology. Mike Sylvia’s project combines storage virtualization, storage management automation and analytics. Today, IBM offers the technology in a bundled solution called SmartCloud Virtual Storage Center.
How does it work?
Step 1: IBM's storage virtualization controller collects detailed usage metrics about the storage it manages throughout the data center, without impacting application performance.
Step 2: IBM's Storage Analytics Engine studies usage patterns over time to understand performance requirements.
Step 3: Storage tier recommendations are generated in reports that can be shared with application owners and IT management.
Step 4: Storage virtualization enables online data migration, with no disruption to applications or users.
Repeat: Usage patterns change over time, of course, so right tiering becomes an ongoing process (sketched in code below).
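As a conceptual sketch of that loop (tier names, thresholds, and metrics are invented here; the analytics in SmartCloud Virtual Storage Center are far more sophisticated):

```python
# Conceptual right-tiering loop: measured usage drives a tier
# recommendation, which a storage virtualization layer could then act on
# with an online migration. All names and numbers are illustrative.
def recommend(avg_iops_per_gb: float) -> str:
    if avg_iops_per_gb > 1.0:
        return "ssd"
    if avg_iops_per_gb > 0.1:
        return "enterprise"
    return "nearline"

# Steps 1-2 stand-ins: usage metrics collected and averaged per volume
volumes = {"erp-data": 1.8, "hr-archive": 0.02, "web-logs": 0.3}

for vol, iops in volumes.items():
    # Step 3: report the recommendation to owners and IT management
    print(f"{vol}: {iops} IOPS/GB -> recommend {recommend(iops)} tier")
    # Step 4 would migrate the volume online, with no application outage
```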
Why does it work?
Automated right tiering delivers the efficiency benefits of Information Lifecycle Management without the headaches and hidden costs. Automated right tiering has significant benefits for both data owners and IT leaders, so everyone wins.
For example, application and database owners can gain the following benefits:
Applications can move to top tier storage when they need it, without waiting for a maintenance window.
Average storage costs drop significantly, without a drop in services.
IT leaders benefit, too. For example:
Storage tier decisions are based on analysis of actual usage patterns, not predictions.
Storage performance management tasks are eliminated.
Data can quickly and easily be moved back to its original storage tier if requested, without incurring an outage.
IBM automated right tiering works with most storage systems, so deployment is nondisruptive.
The technology that enables automated right tiering has significant additional benefits, such as the ability to eliminate scheduled outages for storage system maintenance.
Problem solved. How has your organization addressed the storage right tiering challenge?
Watch a video of Mike Sylvia describing his automated right tiering project at the IBM Edge conference in June, 2012.
Listen to IBM storage virtualization expert and master inventor Barry Whyte's two-part webcast, "Storage Virtualization – IBM SVC – Benefits", from April 2012.
Visit IBM’s Virtualized SAP Demo and other smarter solutions at VMworld August 26-30, 2012 in
IBM has bundled automated right tiering technology into a new solution called SmartCloud Virtual Storage Center, available through IBM sellers and Business Partners.
Delbert Hoobler | Tags: tivoli storage-blog storage storage-software storage-management tsm flashcopy snapshot exchange | 2 Comments | 3,860 Visits
IBM just announced that Tivoli Storage Manager for Mail - Data Protection for Exchange 6.1.2 and IBM Tivoli Storage FlashCopy Manager 2.2 now support Microsoft Exchange Server 2010! For more details, read the FlashCopy Manager Version 2.2 announcement or see my blog from yesterday.
There are a few important things to take note of. Microsoft Exchange Server 2010 included some significant changes, a number of which affect backup and restore. For example, under Exchange Server 2010:
With the release of Data Protection for Exchange version 6.1.2 and IBM Tivoli Storage FlashCopy Manager version 2.2 on June 4, 2010, we have implemented support for these changes. Here are details about the TSM functionality for Exchange Server 2010 that will be available on June 4, 2010:
Note: VSS backups to the TSM Server are enabled without the requirement for a TSM for Copy Services or FlashCopy Manager license.
Richard Vining | Tags: productivity vadp vsphere backup protection recovery volume controller cloud manager hypervisor vm san tivoli center restore vcenter storage unified data | 3,850 Visits
I recently read an excellent post by Ron Riffe, a fellow IBMer, discussing practical recommendations for introducing cloud techniques into a private storage environment – the end goal being to save your company a substantial amount of money while becoming more responsive to the needs of the business. The first of the four steps discussed in the post was to introduce a storage hypervisor – virtualization of your storage infrastructure. It's a good idea, especially if you have already virtualized some or all of your production server environment with something like VMware.
But there’s more to it than just the efficiency and mobility you get from virtualizing. The customers we talk to are finding new value that rises out of the synergy when both the server and storage environments are virtualized. One example is in the area of data protection. In this post, I’m going to explain the 1+1=3 effect for data protection that comes from combining VMware with a good storage hypervisor.
Let’s start with a quick walk down memory lane. Do you remember what your data protection environment looked like before virtualization? There was a server with an operating system and an application… and that thing had a backup agent on it to capture backup copies and send them someplace (most likely over an IP network) for safe keeping. It worked, but it took a lot of time to deploy and maintain all the agents, a lot of bandwidth to transmit the data, and a lot of disk or tapes to store it all. The topic of data protection has modernized quite a bit since then.
Today, you’re using a server hypervisor (VMware) to efficiently pack several virtual machines onto one physical server – and to make it so you can deploy, move and decommission those VMs pretty much at will. If you are still using the old techniques for data protection (deploying an agent on each individual VM, and then transferring all the backup data for those VMs through the one IP network pipe) on that physical server, you’re probably running into significant performance and application availability problems, and also missing out on some significant savings (if you listen carefully, you can hear your backup environment screaming ‘modernize me, MODERNIZE ME!”).
Fast forward to today. Modernization has come from three different sources – the server hypervisor, the storage hypervisor and the unified recovery manager. The end result is a data protection environment that captures all the data it needs in one coordinated snapshot action, efficiently stores those snapshots, and provides for recovery of just about any slice of data you could want. It’s quite the beautiful thing.
Data capture: VMware has provided a nice set of APIs that allow disk arrays and backup vendors to intelligently drive snapshots of a VMware datastore (for the techies, these are the vStorage APIs for Data Protection, or VADP). The problem is that integration from a disk array to these APIs is a tier-1 kind of service found on very few disk arrays today. That's where a good storage hypervisor comes in. A storage hypervisor includes its own integration between VMware VADP and hardware-assisted snapshots, and it plugs its control GUI directly into the VMware vCenter management console. That means, regardless of what type of disk array capacity you have chosen to use for your VMware data, the storage hypervisor will be able to do a hardware-assisted snapshot of the VMware datastore (all your VMs at once – sweet!).
Efficient storage: Here’s a scenario we see…
The snapshots can add up, so efficiency is important. For the “online” snapshots, a good storage hypervisor stores only incremental changes, compresses the result and stores it as a thin provisioned volume on lower-tier disk capacity (the new 3TB SAS drives make a nice choice). Notice in this scenario, the administrator is also promoting one of the snapshots each day (say, the midnight snapshot) to an enterprise recovery manager. If you are using IBM’s Tivoli Storage Manager Suite for Unified Recovery, then it will insert deduplication in the list of efficiency techniques being applied to the snapshot (incremental snapshots that are deduplicated, compressed, and stored on lower-tier disk… that’s about as efficient as it gets).
Flexible recovery: Whether the snapshot is online or nearline, the only reason you have it is so that you can recover when something (anything) goes wrong. A good hypervisor / unified recovery manager combination will give VMware administrators the ability to peer inside the snapshot and recover individual files, virtual volumes, or entire VMs. Using the scenario above, your recovery point would be no more than 6 hours old for the last 4 days, and your recovery time would be measured in minutes.
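The scenario's schedule can be sketched from the prose: a hardware-assisted snapshot every 6 hours kept online for 4 days, with the midnight snapshot promoted each day to the recovery manager. The details below are reconstructed for illustration, not taken from any product configuration:

```python
# Snapshot schedule reconstructed from the scenario described above:
# 6-hour snapshots retained online for 4 days; the midnight snapshot is
# also promoted to the enterprise recovery manager. Purely illustrative.
from datetime import datetime, timedelta

SNAP_INTERVAL = timedelta(hours=6)
ONLINE_RETENTION = timedelta(days=4)

def plan(now: datetime):
    online, promoted = [], []
    t = now - ONLINE_RETENTION
    while t <= now:
        online.append(t)
        if t.hour == 0:       # the midnight snapshot also goes to TSM
            promoted.append(t)
        t += SNAP_INTERVAL
    return online, promoted

online, promoted = plan(datetime(2011, 9, 20, 0, 0))
print(f"{len(online)} online snapshots, {len(promoted)} promoted copies")
```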
IBM offers one of the world's best-known unified recovery managers and the world's most widely deployed storage hypervisor. With over 7,000 storage hypervisor deployments, we've had a lot of opportunity to build some depth. Deep integration with VMware for modernizing your data protection environment is one example. If you are running VMware and haven't yet modernized data protection, IBM can help. You can learn more at the following links.
Storage hypervisor platform: IBM System Storage SAN Volume Controller (SVC)
Storage hypervisor management, storage service catalog, and self-service provisioning: Tivoli Storage Productivity Center Standard Edition (TPC SE)
Data protection integration: Tivoli Storage FlashCopy Manager and Tivoli Storage Manager Suite for Unified Recovery
Join the conversation! The virtual dialogue on this topic will continue in a live group chat on September 23, 2011 from 12 noon to 1pm Eastern Time. Join some of the Top 20 storage bloggers, key industry analysts and IBM Storage subject matter experts to discuss storage hypervisors and get questions answered about improving your private storage environment.
Live from Pulse2013 recap – Tuesday, March 4, 2013: Tuesday Was All About Client Implementations and Experiences!
Stuart Thomson | Tags: pulse back-up tpc disaster management tsm ibmpulse tivoli storage recovery cloud | 3,831 Visits
Following an outstanding Pulse Palooza party on Monday night that featured a 2-hour performance by 6-time Grammy Award winner Carrie Underwood, you might have expected Tuesday's General Session to be a little quieter than usual. That wasn't the case at all: the energetic vibe from today's session picked up right where Monday left off, helping many quickly shake off the effects of a wild Monday night.
This morning’s 90-minute general session was themed “Best Practices in Action” and featured a client panel of IT leaders from AT&T, Equifax, Carolinas Healthcare System and the Port of Cartagena sharing how they are converting opportunities from Cloud, Mobility and Smarter Physical Infrastructures into tangible business outcomes.
The Unified Recovery & Storage Management track picked up on the General Session theme, with Tuesday's breakout sessions featuring no fewer than TEN Tivoli Storage clients sharing real-life examples of how they are applying IBM Tivoli backup and recovery and storage management solutions to address a host of complex challenges. While this represents just a tiny sliver of the valuable content, some of the session takeaways included:
• Irfan Karachiwala (Ph.D.), Manager, Enterprise Data Strategy at Kindred Healthcare, a post-acute healthcare provider with over 450 locations in the U.S., has realized improvements in Recovery Point and Recovery Time Objectives by switching from data-only backups to VM-based image backups using Tivoli Storage Manager for Virtual Environments;
On a day that put IBM clients "front and center", it was only fitting to close Tuesday with the Tivoli Storage Birds of a Feather meeting. This two-hour, highly interactive discussion gave clients the opportunity to get all their questions answered and provide direct feedback to Tivoli Storage executives, developers, and product managers.
Based on the buzz around the Storage breakouts, it was clear that the client focus on Tuesday was a hit, so a huge thank-you to all the clients who took the time to prepare and share their stories at Pulse2013. Pulse wouldn't be a reality without your contributions!
As another Pulse begins to wind down, it’s time to start thinking about IBM Edge2013 in June. The Edge conference will bring us back to Las Vegas to hear more clients describe how they are Optimizing Storage and IT. If you weren’t able to join the 8000 of us at Pulse2013, start making plans to attend Edge by finding out everything you need to know (including the early-bird discount available through the end of April) at the IBM Edge2013 Conference website.