For those of our customers who have upgraded to TPC v5.1 or later, the post below will validate your smart decision to embrace the web-based graphical user interface (GUI). For everyone else, read on to understand what you are missing...
Back in June 2012, Tivoli Storage Productivity Center (TPC) v5.1 was announced with two user interfaces for managing the storage infrastructure: the stand-alone GUI and the web-based GUI. The web-based GUI provides a quick access to pages that you can use to monitor the condition, capacity, and relationships of the resources within your storage environment.
The web-based GUI gives you access to critical storage information without being tied to the administrator console on your data center premises. The web-based GUI helps you do the following:
- View and acknowledge the status of monitored resources, including storage systems, servers, hypervisors, fabrics, and switches
- View summary and detailed information about resources, including properties, usage, capacity, and several key performance metrics
- View the relationships and storage mapping among resources
- View the connectivity between host connections, servers, hypervisors, virtual machines, storage systems, and fabrics
- Identify potential problems and troubleshoot existing problems in a storage environment
- Create reports, including a drag-and-drop approach to designing custom reports that contain detailed information about the properties and performance of monitored resources. The reporting interface also includes a set of predefined reports that provide quick access to preformatted data about resources.
The web-based GUI also allows you to start the element managers for various storage systems and switches. To learn more about managing resources in the web-based GUI, click here to access the TPC information center.
In its latest edition, TPC v5.1.1, the web-based GUI is enhanced to provide a detailed view of the data path. The data path view shows the connectivity between host connections, servers, hypervisors, virtual machines, storage systems, and the fabrics through which they share storage. You can use this view to monitor status and pinpoint problem areas in selected data paths. This view includes both graphical and tabular representations of the top-level resources in a data path.
You can use the data path view to:
- View the path of data that is shared between resources
- View the fabrics through which resources in a data path are communicating
- View the propagated status of the top-level resources that are in a data path
- View the status of the internal resources for top-level resources that are in a data path
- Customize the appearance of the data path view to suit the needs of your environment
- Export the data path view as an image or CSV file
For more information about viewing data paths in the web-based GUI, click here.
Data centers across enterprises are witnessing unprecedented data growth, which translates into increased costs and management complexity. The Evaluator Group, one of the leading analyst firms, has analyzed the storage resource management (SRM) software space and created a detailed insight report that outlines how the SRM segment is evolving, and why storage managers need a storage strategy aided by a comprehensive tool such as IBM Tivoli Storage Productivity Center to better manage future challenges.
What's more, in addition to simplified management of capacity, performance, and provisioning of the storage infrastructure, Tivoli Storage Productivity Center V5.1 also enables comprehensive storage replication management.
In many enterprises today, storage replication is riddled with manual errors and poorly written in-house scripts that often provide no view of the overall copy environment status. Additionally, the setup and ongoing management of large-scale copy services is increasingly cumbersome.
To learn more about TPC's advanced replication management capabilities, tune into the upcoming webinar "Simplified storage replication for high data availability" through Tivoli User Community on Nov 13, 2012 at 11AM ET. Click here to register for this event.
In many organizations today, storage replication is riddled with manual errors and poorly written in-house scripts that often provide no view of the overall copy environment status. Additionally, the setup and ongoing management of large-scale copy services is increasingly cumbersome. Tivoli Storage Productivity Center (TPC) enables simplified yet comprehensive control over the replication process. With the release of TPC v5.1 in June 2012, replication management capabilities are now integrated into the TPC core license.
TPC supports FlashCopy, Metro Mirror, Global Mirror, and Metro Global Mirror sessions. While providing a central view of the replication environment, TPC delivers end-to-end management and tracking of copy services, including both planned and unplanned disaster recovery procedures. In addition, TPC enables practice-volume sessions that allow storage managers to test their DR environment without interfering with daily DR operations.
The following new capabilities were added to TPC v5.1:
Failover operations that are managed by other applications
Applications such as the IBM Series i Toolkit, VMware Site Recovery Manager, and Veritas Cluster Server manage failover operations for certain session types and storage systems. If an application completes a failover operation for a session, the ‘Severe’ status is displayed for the session. An error message is also generated for the role pairs for which the failover occurred.
Additional support for space-efficient volumes in remote copy sessions
You can use extent space-efficient volumes as copy set volumes for the following IBM System Storage® DS8000® session types:
• FlashCopy® (System Storage DS8000 6.2 or later)
• Metro Mirror (System Storage DS8000 6.3 or later)
• Global Mirror or Metro Global Mirror (System Storage DS8000 6.3 or later)
Reflash After Recover option for Global Mirror Failover/Failback with Practice sessions
You can use the Reflash After Recover option with System Storage DS8000 version 4.2 or later. Use this option to create a FlashCopy replication between the I2 and J2 volumes after the recovery of a Global Mirror Failover/Failback with Practice session. If you do not use this option, a FlashCopy replication is created only between the I2 and H2 volumes.
No Copy option for Global Mirror with Practice and Metro Global Mirror with Practice sessions
You can use the No Copy option with System Storage DS8000 version 4.2 or later. Use this option if you do not want the hardware to write the background copy until the source track is written to. Data is not copied to the I2 volume until the blocks or tracks of the H2 volume are modified.
Recovery Point Objective Alerts option for Global Mirror sessions
You can use the Recovery Point Objective Alerts option with IBM TotalStorage Enterprise Storage Server® Model 800, System Storage DS8000, and System Storage DS6000™. Use this option to specify the length of time that you want to set for the recovery point objective (RPO) thresholds. The values determine whether a Warning or Severe alert is generated when the RPO threshold is exceeded for a role pair. The RPO represents the length of time, in seconds, of data exposure that is acceptable in the event of a disaster.
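To illustrate how the RPO thresholds work, here is a minimal sketch of the alerting logic. This is an illustration only, not TPC's actual implementation; the function name and threshold values are hypothetical:

```python
def rpo_alert_level(rpo_seconds, warning_threshold, severe_threshold):
    """Classify a role pair's current RPO against configured alert thresholds.

    rpo_seconds: observed data exposure, in seconds
    warning_threshold / severe_threshold: configured limits, in seconds
    """
    if rpo_seconds > severe_threshold:
        return "Severe"
    if rpo_seconds > warning_threshold:
        return "Warning"
    return "OK"

# A role pair 90 seconds behind, with thresholds of 60s (Warning) and 120s (Severe):
print(rpo_alert_level(90, 60, 120))  # Warning
```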
Posted on behalf of Martine Wedlake, Ph.D., Storage UI Architect, IBM Software Group
From talking with customers, we know that it's really important that you find what you need quickly and easily. The original navigation structure for IBM Tivoli Storage Productivity Center (TPC) was built around a resource-explorer model -- very much like the Windows file explorer. This, unfortunately, means you can have a ton of entries in the navigation that you need to hunt through to find anything.
For example, I took a look at one of our TPC deployments in the lab and started counting the number of clickable entries in the navigation -- I stopped counting once I got to 1000. Based on how far I got, I'd say there were about 1500 or so. I should point out that this is not a particularly large deployment -- 25 storage systems, 7 servers, 4 hypervisors, and 5 fabrics with 46 switches. You can expect a much larger set of entries in larger deployments.
So, we knew pretty early on that we needed to improve the navigation. To do that we switched from a resource explorer view to a by-category view. This allowed us to dramatically simplify the navigation to only 13 high-level categories and no more than two levels deep. No more hunting and pecking to find what you want!
We also made it possible to link directly to the things you want without having to go through the navigation at all. For example, from an SVC storage system's detail page you can link directly to the set of backend controllers in your environment that are consuming its storage. You don't need to go back out to the navigation menu and track down the servers all over again. Here’s a picture:
The overall concept is that whenever you see something interesting, you should be able to drill down into it. In addition to the navigation of the product, we've spent considerable effort making the content of the user interface easier and more intuitive, and making it consistent with the work we had done previously on the Storwize V7000 and SAN Volume Controller user interfaces -- if you've seen one of our GUIs, you'll be able to get up to speed quickly on any of the others.
To that end, we borrowed significantly from the Storwize V7000 GUI -- for example, configurable tables, the visual theme, the embedded help system, charting, and general icons. Here’s a screenshot of the Storwize V7000 GUI to help show the similarities:
Beyond these cosmetic enhancements, we spent a lot of time working with our stakeholders to deliver the content in an intuitive and simplified way. Knowing what to put on the pages and how to simplify the pages involved a dramatic shift in our development process. But, before I move on to that, I really need to highlight the improvements made with reporting.
In this release, we've embraced Tivoli Common Reporting, which includes IBM Cognos. This is a huge step forward in improving your ability to view and create reports for TPC.
To start with, you do not need to know SQL or the database schema to create reports -- the drag-and-drop interface allows you to simply incorporate the data columns you wish to show, and Cognos already understands the relationships between the entities. For example, say you want to show the volumes connected to a given server. In Cognos, you simply add columns for the Server Name and the Storage Volume to the canvas. The tool understands the relationships between these entities and automatically joins the data to show which volumes are mapped to which servers.
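Behind the scenes, the join Cognos performs is conceptually similar to the following sketch. The data and field names here are invented for illustration; Cognos derives the real relationships from the TPC data model:

```python
# Hypothetical sample data: servers, volumes, and the mapping between them.
servers = [{"server_id": 1, "server_name": "app01"},
           {"server_id": 2, "server_name": "db01"}]
volumes = [{"volume_id": 10, "volume_name": "vol_app"},
           {"volume_id": 11, "volume_name": "vol_db"},
           {"volume_id": 12, "volume_name": "vol_logs"}]
mappings = [{"server_id": 1, "volume_id": 10},
            {"server_id": 2, "volume_id": 11},
            {"server_id": 2, "volume_id": 12}]

# Join the three entities on their keys, as a reporting tool does automatically.
server_by_id = {s["server_id"]: s["server_name"] for s in servers}
volume_by_id = {v["volume_id"]: v["volume_name"] for v in volumes}
report = [(server_by_id[m["server_id"]], volume_by_id[m["volume_id"]])
          for m in mappings]
for server, volume in sorted(report):
    print(server, "->", volume)
```

The point is that the user only ever picks the Server Name and Storage Volume columns; the key lookups and the join itself happen out of sight.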
Of course, we also provide upwards of 45 reports out of the box for those who don't wish to create reports themselves. Another neat feature is that the reports included with TPC can be copied and edited with the built-in editor within Cognos, so you can take one of our reports and modify it to your liking. Here are some of the reports that are included with TPC:
Working with our customers is one of the most rewarding aspects of my work here at IBM. For this release of TPC, we employed a radically different development model from what we have used in the past. Like many in the industry, we used to develop with a waterfall methodology, in which requirements are captured and approved at the beginning of the project, leading to a phase of high-level and low-level design, and eventually to development, test, and delivery to the customer. For this release of TPC, we wanted to include customer input throughout the development cycle -- not just at the beginning when collecting requirements.
As such, we hosted 17 sessions with 34 customers, 7 business partners, and 4 internal customers, spread throughout the development cycle (held monthly). We also sent developers and GUI designers into the field to talk directly with 7 customers. From these combined sessions, we captured 261 distinct requirements and were able to address 188 of them within the first phase of development, with 47 deferred to the second phase. That means 72% of the requirements are already implemented in the first phase alone, and 90% are expected to be implemented by the second phase. This is very impressive compared with traditional waterfall development.
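The percentages above follow directly from the counts; a quick check:

```python
captured = 261          # distinct requirements captured
phase1 = 188            # addressed in the first phase
phase2_deferred = 47    # deferred to the second phase

phase1_pct = round(100 * phase1 / captured)
by_phase2_pct = round(100 * (phase1 + phase2_deferred) / captured)
print(phase1_pct, by_phase2_pct)  # 72 90
```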
The best part of an iterative, agile approach is that we are constantly evaluating the effectiveness of the solutions. We learn right away if something isn't quite hitting the mark, and have plenty of time to make changes to improve it.
As a quick plug, it is not too late to participate in our Early Adopter Program for the next phase of TPC. Please feel free to contact me directly (email@example.com) if you would like to participate. We would love to work with you to make TPC even better.
IBM is looking for customers and business partners who are interested in participating in an Early Access Program (EAP)/Beta Program for an upcoming release of FlashCopy Manager, Data Protection for SQL, and Data Protection for Exchange. If you would like to nominate your organization to participate in this EAP/Beta, please send an email to:
Mary Anne Filosa (firstname.lastname@example.org)
and be sure to include your organization's name. Once your email is received, you will be sent instructions for signing off on the EAP/Beta legal form online. When that signoff has been completed, you will be sent a link to the program's nomination site. We encourage you to respond quickly if you are interested, as the program begins in mid-December.
In my earlier post – Eliminate management inefficiencies and complexities associated with your cloud foray – I briefly touched upon ‘storage tiering reports’. These reports are now available as part of this week's Tivoli Storage Productivity Center v4.2.2 announcement. One of the latest Storage Wave studies by The InfoPro points to ‘Tiered Storage Build Out’ as one of the top three initiatives among storage managers. Yet in a complex, virtualized environment, having complete visibility and control over storage tiering can be challenging.
Tivoli Storage Productivity Center provides capabilities for reporting on storage tiering activity to support data placement and to optimize resource utilization in a virtualized environment. The storage tiering reports leverage the estimated capability and actual performance data for IBM SAN Volume Controller and IBM Storwize V7000, and offer storage administrators key insights such as:
• Are the backend subsystems optimally utilized?
• Does moving a certain workload to low-cost storage impact service levels?
• How can performance be leveled out in a certain pool?
• Which data groups can be moved to an alternate tier of storage?
Image: Sample tiering distribution report
By having a comprehensive view of performance stress on the hardware, storage tiering reports enable administrators to make proactive decisions about volume placement, thus averting downtime or impact on data availability.
Tivoli Storage Productivity Center enables storage administrators to optimize disk configurations, such as by progressively and dynamically changing the storage tier percentage distribution between high-end, mid-range, and low-end storage. For example, an initial 70/30/0 split can be changed to a new distribution of 30/50/20, enabling the organization to realize the corresponding storage infrastructure savings.
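To see why such a redistribution saves money, here is a back-of-the-envelope sketch. The per-terabyte costs and the total capacity are hypothetical figures chosen for illustration, not IBM pricing:

```python
# Hypothetical cost per TB for each tier (high-end, mid-range, low-end).
cost_per_tb = {"high": 3000, "mid": 1500, "low": 500}
total_tb = 100  # assumed total capacity

def tier_cost(split):
    """Cost of a capacity split given as percentages per tier."""
    return sum(total_tb * pct / 100 * cost_per_tb[tier]
               for tier, pct in split.items())

before = tier_cost({"high": 70, "mid": 30, "low": 0})    # the 70/30/0 split
after = tier_cost({"high": 30, "mid": 50, "low": 20})    # the 30/50/20 split
savings_pct = 100 * (before - after) / before
print(f"{savings_pct:.0f}% lower storage cost")
```

Under these assumed costs, shifting capacity off the high-end tier cuts the storage bill by roughly a third, which is the kind of saving the tiering reports help administrators identify safely.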
To read more about Tivoli Storage Productivity Center, click here.
What’s new in Tivoli Storage Productivity Center?
IBM announces Tivoli Storage Productivity Center Select -- comprehensive storage management software that offers advanced provisioning, performance management, capacity optimization, and reporting capabilities. Select includes all key capabilities of the Basic Edition, Disk, and Data modules of the Tivoli Storage Productivity Center family, and is conveniently packaged for ‘per enclosure’ licensing.
Select complements Tivoli Storage Productivity Center for Disk Select (formerly Disk Midrange Edition) and is ideal for the management of IBM XIV, Storwize V7000, DS3000, DS4000, and DS5000 systems as stand-alone devices or when attached to a SAN Volume Controller. Select also supports any device that is attached to a Storwize V7000.
IT managers are broadly exploiting virtual server infrastructures -- hypervisors -- to improve efficiency, provide for transparent mobility, and give common manageability and capabilities regardless of type of server hardware being used. These same robust benefits are now available for virtual storage infrastructures with the IBM storage hypervisor (System Storage SAN Volume Controller and its management console the Tivoli Storage Productivity Center).
Listen to the webcast to understand how the IBM storage hypervisor can be a complementary next step in the overall IT environment virtualization process.
I wanted to let everyone know that IBM Tivoli Storage FlashCopy Manager for Windows Version 2.2.1 was just released!
In June of this year, I blogged about IBM Tivoli Storage FlashCopy Manager version 2.2.0. I talked about how FlashCopy Manager 2.2 provides fast application-aware backups and restores leveraging advanced snapshot technologies. I also discussed how FlashCopy Manager on Windows 2.2.0 added new support for Microsoft Exchange Server 2010 and Microsoft SQL Server 2008 R2 as well as other enhanced performance and functionality.
We continue to add more functions and features to IBM Tivoli Storage FlashCopy Manager. This past Friday (December 10th, 2010), IBM released IBM Tivoli Storage FlashCopy Manager Version 2.2.1 with the following changes:
Updates Applicable to All Platforms
Support for SVC 6.1
Support for IBM Storwize® V7000
Updates Applicable to all FlashCopy Manager components that run on AIX, Linux, and Solaris
Support for AIX 7.1 in non-SAP environments
Support for Oracle ASM on Solaris and Linux
Support for SVC and IBM Storwize® V7000 Space Efficient target volumes for FlashCopy Manager Cloning operations on AIX, Linux, and Solaris
Updates Applicable to the FlashCopy Manager for Exchange Component
Support for VSS backups to a TSM Server without needing a TSM for Copy Services or FCM license
Support for SVC and DS8000 family devices in a VMware guest OS environment
Improved support for VSS backups in clustered and offload environments
Updates Applicable to the FlashCopy Manager for SQL Component
Improvements to Query & Backup Performance in Environments with Large Numbers of SQL Servers
Support for VSS backups to a TSM Server without needing a TSM for Copy Services or FCM license
Support for SVC and DS8000 family devices in a VMware guest OS environment
Improved support for VSS backups in clustered and offload environments
New Video: ManTech International helps the United States Department of State reduce their backup and recovery time using Tivoli Storage Manager FastBack
Peter Stark is Executive Director at ManTech International, which is under contract to the United States Department of State to provide global IT modernization of all State Department information systems around the world. The department operates two physically separate worldwide networks for classified and unclassified data, with up to 3,000 servers spread throughout the world. By using the Tivoli Storage Manager FastBack solution, they are able to take eight snapshots a day from the Exchange server, each taking only two or three minutes to run, and can recover objects in 5 or 10 minutes -- something that was previously not feasible with a 46-hour backup and recovery time.
VCU Medical Center is one of the leading academic medical centers in the United States and the only academic medical center in central Virginia, offering state-of-the art care in more than 200 specialty areas along with Level 1 trauma care.
Business need: For VCU Health System, technology provides the foundation for transforming clinical services and delivering patient care. However, with a heterogeneous storage infrastructure and no single user interface, the team’s three storage engineers faced significant hurdles in managing growing data volumes and recovering data quickly when needed.
Solution: Working with IBM, the health system implemented a virtualized, scalable and high-performance storage infrastructure that improves service levels, reduces costs, mitigates risks and supports an increasing amount of data (growing at more than 20 percent annually).
Benefits: Reduced data recovery time; shortened the data migration process with a greater probability of success; standardization and consolidation of storage systems reduced the storage footprint and decreased data center temperatures from 43 to 68 degrees, lowering cooling and energy costs; reduced storage spending
“With Tivoli Storage Manager, we can set multiple recovery point objectives and the XIV allows us to keep multiple snapshots of the data without impacting performance. So we can have copies and copies and copies of the data where we couldn’t before.” —Greg Johnson, CTO and Director, Technology & Engineering Services, VCU Health System
Read the complete case study for more details on how VCU Medical Center worked with IBM to gain uninterrupted data access.
More success stories of other customer implementations of IBM technologies can be found here.
New Video: Tivoli Storage Manager runs a smarter Data Center
Ohio Health has eight member hospitals, nine affiliate hospitals, and numerous out-patient facilities throughout Ohio. Many of their clinical systems run on pSeries hardware with the AIX operating system. They have two data centers, a primary and a secondary site, and run systems at both. Backups are critical in their clinical environment because backup affects patient care. They use Tivoli Storage Manager for their backup environment: Tivoli Storage Manager writes directly to the primary site, and the data is replicated to the second data center. Using a disk-based backup method, they have shaved seven hours off admin processing time because they don't have to write off-site copies.
Watch the video and hear why Ohio Health loves using Tivoli Storage Manager
A new supercomputer at the University of Kentucky has placed it in the top 10 of public universities for compute power.
According to UK President Lee T. Todd Jr, "This supercomputer will allow our world-class researchers to discover new solutions to the complex problems facing the Commonwealth, the nation, and the world."
This new high-performance compute cluster comes with 200 terabytes of usable disk storage. This important data is protected by Tivoli Storage Manager (TSM) and Hierarchical Storage Manager (HSM), connected to the UK central backup system.
I'd like to congratulate UK on making it into the top 10 public university supercomputers. Lastly, I'd like to thank them for selecting TSM to protect this critical infrastructure.
You can find the full report on the UK supercomputer at HPCwire.
Pulse 2011 Call for Speakers Opens Wednesday, September 22!
Boy oh boy, time sure flies when you're having fun. It seems like I was just at Pulse 2010 in Las Vegas, being a roving reporter, capturing customer, business partner and Subject Matter Expert Videos. It's actually been about nine months and once again it's time to ramp up for Pulse 2011.
Pulse will return to the MGM Grand in Las Vegas February 27 through March 2, 2011. Just like Pulse 2010, we're looking for client speakers to share their success stories and speak in the different track sessions. Do you have a storage success story? What are you doing to make your organization smarter when it comes to storing and backing up your data? How do you gain visibility across your infrastructure, including your storage environment? Are you in control of your data, no matter where it resides? How have you leveraged automation technologies to manage the explosion of data, and the need for instant accessibility? We want to hear from you! What software, hardware and services are you utilizing to deliver better services within your organization, to your internal and external customers? Come share your story of how you're using IBM Storage as a part of your organization's Integrated Service Management implementation.
At Pulse 2010, there were over 300 client speakers and if you weren't a speaker then, you should definitely submit your proposal for Pulse 2011. Check out the benefits of being a client speaker!
Client Speaker Benefits: Pulse 2011 client speakers will receive complimentary registration to the conference, and the first 50 to submit a proposal will receive a FREE hotel accommodations upgrade* to a Celebrity Spa Suite at the MGM Grand if the proposal is accepted! *The speaker pays for the basic room and is awarded the upgrade if their paper is among the first 50 to be accepted.
Client Speaker Benefits include:
One full conference pass ($1995 value)
Use of our exclusive Client Speaker VIP Lounge
Networking opportunities with over 6000 industry experts, press, and analysts
Eligibility for the Maximo Best Practices Award (EAM papers only).
Read Jennifer Dennis' blog Pulse 2011 Call for Speakers - Opens 9/22 @ibm.com/pulse! for details on submitting your proposal. Don't delay -- get prepared to submit your proposal right away; those 50 upgrades will be going fast! Here are some customer speaker interviews I did during Pulse 2010; hopefully they will give you an idea of what you can submit for your proposal.
Juniper Networks recently published a solution brief regarding the performance boost you get from using TSM FastBack in concert with their WAN optimization offering (WXC). The value proposition is pretty straightforward: reduced backup times and reduced WAN bandwidth and cost. You can read the full details in the report, but here are a few snippets worth noting:
Conceptual view of the bandwidth savings ...
Savings of backing up 92GB over a 155Mbps link with 100 ms latency:
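For scale, the raw line-rate transfer time for that workload can be estimated with a quick calculation. This sketch ignores protocol overhead and the effect of the 100 ms latency, both of which make the unoptimized transfer slower in practice and are precisely what WAN optimization targets:

```python
data_gb = 92       # backup size in GB (decimal: 1 GB = 10**9 bytes)
link_mbps = 155    # link speed in megabits per second

bits = data_gb * 1e9 * 8
seconds = bits / (link_mbps * 1e6)
minutes = seconds / 60
print(f"~{minutes:.0f} minutes at full line rate")  # ~79 minutes
```

In other words, even a perfectly utilized 155 Mbps link needs well over an hour for this backup, which is why reducing the bytes sent over the WAN matters so much.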
These savings are above and beyond those you already get with using TSM Fastback (taken from solution brief):
IBM Tivoli Storage Manager FastBack provides an extensive and cost-effective data protection and recovery solution specifically designed to help remote offices maintain operations, regardless of the type of data loss.
The FastBack Client uses next-generation, block-level technology to capture new and changed data on the application servers as frequently as needed—up to true CDP—with almost no performance impact on the applications. This provides for much more granular recovery, leaving much less data at risk of loss than traditional once-a-night backup solutions.
The FastBack Server provides the management, policy engine, and local repository for the protected data. The server includes near-instant restore capabilities, enabling critical applications to resume within seconds following almost any type of data loss. The server also initiates “selective replication” jobs to send copies of selected data over the WAN to another location for disaster recovery and backup consolidation capabilities.
The FastBack DR Server aggregates the backup data from multiple remote offices—enabling extremely fast recovery of remote office workloads should an office go offline for any reason. The FastBack DR Server also can be used to enhance protection of business-critical application servers in the data center, and it integrates easily with central data protection and retention solutions such as IBM’s Tivoli Storage Manager.
TSM FastBack is a solution that has seen strong adoption from customers with remote offices. If backup times or bandwidth usage over a WAN are a concern, I suggest you look into the WXC offering from Juniper Networks in concert with TSM FastBack.
At the recent Gartner IOM 2010 conference in Orlando, Florida, I had the good fortune of listening to a series of interesting topics and meeting some really smart people. As one might have guessed, the bulk of the sessions focused on virtualization and cloud topics. But the one topic that piqued my interest was unrelated to virtualization and cloud -- it was deduplication, and it was hosted by Dave Russell.
The intent of the session was to bring forward some examples of customers deploying deduplication technologies in their backup and recovery solutions. Most of you who read this blog know that deduplication and data reduction have been a hot topic in the industry. And as you likely know, almost every major vendor out there offers some form of deduplication, with its associated benefits.
This session provided us with two customers who were willing to talk about their experiences with deduplication and the benefits they've received. One customer is using CommVault and the other is using IBM Tivoli Storage Manager v6 (TSM). While both customers showcased the quantified benefits of deduplication, the presentation from the TSM customer went beyond those benefits: the TSM customer revealed their quantified results and also identified some of the best practices they developed for deduplication.
This particular TSM customer is a large producer of natural gas in the U.S. The customer's environment has TSM managing about 1.3 petabytes of data from more than 1,500 nodes. Overall, their approach to managing backup storage is to do it as efficiently as possible and to reduce the overall amount of backup data under management.
Prior to leveraging TSM deduplication, this customer began with "incremental forever" backups and compression. Once TSM v6 was released, they adopted deduplication at both the server and the client, in concert with the other data reduction features provided by TSM.
As they began evaluating their use of deduplication, they had to deal with demands from their internal customers -- DBAs and Exchange admins like full backups, for example. Furthermore, they had to consider their rate of data change, evaluate retention policies, and ensure that their restore requirements weren't negatively impacted by the use of deduplication.
After significant testing and planning, the customer decided that they would initially deploy deduplication for their Oracle databases and Windows OS and system state backups. The results of using TSM deduplication were impressive ...
Oracle deduplication results: a 75% reduction of Oracle backup data after deduplication. This was on 3.8 TB of physical space on disk and about 15 TB of data on tape.
And their results on Windows OS and System State backups were a whopping 94%, taking them from 172 GB of managed data down to 11.4 GB. In this scenario, the customer leveraged TSM 6.2 client-side (source-side) deduplication.
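The quoted reduction figures can be sanity-checked from the raw numbers; the Windows figure works out to roughly 93-94%, depending on rounding:

```python
def reduction_pct(before, after):
    """Percentage of data eliminated by deduplication."""
    return 100 * (before - after) / before

oracle = reduction_pct(15, 3.8)       # ~15 TB on tape down to 3.8 TB on disk
windows = reduction_pct(172, 11.4)    # 172 GB of managed data down to 11.4 GB
print(f"Oracle: {oracle:.0f}%  Windows: {windows:.0f}%")
```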
Overall, very impressive results. By leveraging the data reduction features within TSM, the customer was able to save by using fewer tape library cells, tape drives, and disks.
In the end, the customer stated that TSM data reduction (with deduplication) helped them meet their objective: efficiently reducing the data under management. Furthermore, it allowed them to reduce their overall hardware costs and meet or improve restore requirements. The last comment the customer made before closing the session was that, with all the various TSM data reduction capabilities in production, their job had ultimately gotten simpler now that their environment was running more efficiently.
This is a fantastic story that I really enjoy sharing. If you are a TSM customer and have benefited from its data reduction technologies, then please give me a shout as I would like to hear your story as well.
I wanted to share some information about an article that we just published regarding backing up Exchange Server 2010.
Along with all the other new features of Exchange Server 2010, Microsoft introduced Database Availability Groups (DAGs). DAGs are part of the large focus that Microsoft put on High Availability and Site Resilience within Exchange Server 2010. DAGs allow you to have passive database copies (aka "replicas") that can serve as hot standbys for protection against machine failures, database failures, network failures, viruses, or other issues that may cause an access problem to a database. DAGs are similar in function to Exchange Server 2007 Cluster Continuous Replication (CCR) replicas. However, they extend the capabilities even further. One of the key benefits that customers get when they use DAGs in their enterprise is the ability to completely offload backups from their production Exchange Servers. That means they can run all of their backups from a database copy instead of the production database so as not to impact their production Exchange servers. This enables the production Exchange Servers to spend their resources on doing what they know best, i.e. handling email and facilitating collaboration.
We just published an article (which includes a sample script) to help you automate backing up your Exchange Server 2010 DAG databases. We hope you will find it quite helpful in setting up your backup strategy:
There are a few important things to take note of. Microsoft Exchange Server 2010 included some significant changes, a number of which affect backup and restore. For example, under Exchange Server 2010:
Legacy-style backups (aka "streaming" backups) are no longer supported by Microsoft
VSS-style backups are the only supported online backup method
Exchange storage groups were removed completely
The Recovery Database replaced the Recovery Storage Group (RSG)
Database Availability Groups (DAGs) have replaced LCR, CCR, and SCR replication
Single Copy Clustering (SCC) is no longer available
With the release of Data Protection for Exchange version 6.1.2 and IBM Tivoli Storage FlashCopy Manager version 2.2 on June 4, 2010, we have implemented support for these changes. Here are details about the TSM functionality for Exchange Server 2010 that will be available on June 4, 2010:
- Full Exchange Server 2010 support
- Command-line Interface (CLI)
- Graphical User Interface (GUI)
- Database Availability Group (DAG) support
- Query Exchange Information
  - Shows all databases with various attributes
  - Shows VSS component information
- Backup types: Full, Copy, Incremental, Differential
- Back up from production database
- Back up from a passive database copy (replica)
- Back up to TSM Server
- Back up to LOCAL snapshot
- Offloaded backup to TSM
- Shows all backups with their attributes
- Restore from TSM Server
- Restore into production database
- Restore into "Recovery Database"
- VSS Instant Restore from LOCAL snapshot
- VSS Fast Restore from LOCAL snapshot
- Individual Mailbox Restore (IMR) and Item-Level Recovery
- FlashCopy Manager and MMC Integration
Note: VSS backups to the TSM Server are enabled without the requirement for a TSM for Copy Services or FlashCopy Manager license.
Finally, a number of you were a part of the FlashCopy Manager 2.2 Beta Program and/or the Data Protection for Exchange 6.1.2 "Limited Availability" program, so thank you for helping us make it a great release!
In December of last year, I blogged about IBM Tivoli Storage FlashCopy Manager for Windows version 2.1. I talked about how FlashCopy Manager provides fast application-aware backups and restores leveraging advanced snapshot technologies. I also discussed how FlashCopy Manager on Windows supports Microsoft SQL Server and Microsoft Exchange Server using Microsoft's Volume Shadow Copy Service (VSS) and how it integrates into your enterprise whether you have Tivoli Storage Manager or not. So, if you haven't read my previous blog about FlashCopy Manager on Windows, why not check that out first, then come back to learn more about the new features we just announced!
This Friday, June 4, 2010, IBM will release IBM Tivoli Storage FlashCopy Manager Version 2.2. Some of the exciting new Windows features in this release include:
Support for Microsoft Exchange Server 2010
Support for Microsoft SQL Server 2008 R2
Performance improvements for Microsoft Exchange Server mailbox history and mailbox restore operations
Performance improvements for large Microsoft SQL Server environments
Enhanced integration with SAN Volume Controller via enablement of VSS Instant Restore when there are multiple backup generations on space efficient target volumes
Did you know? FlashCopy Manager also supports UNIX platforms! Some of the exciting new UNIX features included in FlashCopy Manager Version 2.2 are:
Support for Linux x64 and Solaris SPARC operating systems
Database cloning support
Enhanced integration with SAN Volume Controller via automatic detection of deleted snapshots
A customizable agent that enables you to back up applications not directly supported by the product
IBM Tivoli Storage Productivity Center (TPC) for Disk Midrange Edition V4.1 is now available! Announced last month, TPC for Disk Midrange Edition has been designed to help reduce the complexity of managing midrange SAN environments that include IBM System Storage DS3000, DS4000, DS5000, SAN Volume Controller (SVC) Entry Edition, and IBM Virtual Disk System devices by allowing administrators to configure, manage, and monitor performance of their entire midrange storage infrastructure from a single console. This new offering provides customers the equivalent features and functions of the Tivoli Storage Productivity Center for Disk enterprise offering at a fraction of the cost... up to 80% off list price.
TPC for Disk Midrange Edition is part of the IBM Tivoli Storage Productivity Center V4.1 suite of integrated storage infrastructure management products that are designed to help you manage almost every point of your storage network, from the hosts, through the fabric, to the physical disks in a multi-site enterprise. It can help simplify and automate the management of storage data and the networks to which they connect.
Utilizing a new Storage Management Initiative Specification (SMI-S) Common Information Model (CIM) agent, Tivoli Storage Productivity Center for Disk Midrange Edition can provide over 40 different reports and performance metrics, including:
Controller input and output rate (read/write/total)
Administrators can monitor and analyze performance statistics for these storage systems down to five minute intervals. The performance data can be viewed in real time in the topology viewer, stored for historical reporting, or used to generate timely alerts by monitoring thresholds for various device parameters.
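As a rough illustration of the threshold-based alerting described above, each sampled interval can simply be compared against configured limits. The metric names and threshold values below are hypothetical placeholders, not TPC's actual parameters:

```python
# Hypothetical thresholds for 5-minute performance samples (ops/sec, assumed)
THRESHOLDS = {"total_io_rate": 5000, "read_io_rate": 3000}

def check_sample(sample: dict) -> list:
    """Return an alert string for each metric that exceeds its threshold."""
    return [
        f"ALERT: {name}={sample[name]} exceeds {limit}"
        for name, limit in THRESHOLDS.items()
        if sample.get(name, 0) > limit
    ]

print(check_sample({"total_io_rate": 6200, "read_io_rate": 1500}))
# ['ALERT: total_io_rate=6200 exceeds 5000']
```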
Tivoli Storage Productivity Center for Disk Midrange Edition is set apart from IBM Tivoli Storage Productivity Center for Disk because it is:
Designed to support IBM's entry level and midrange System Storage products (DS3000, DS4000, DS5000, SAN Volume Controller Entry Edition and Virtual Disk System).
During Pulse 2010 in Las Vegas, I interviewed Alistair Mackenzie from Silverstring, an IBM Business Partner. Just last week Silverstring launched TSMagic, which helps you understand your TSM estate like never before... See the news article for more information on TSMagic.
Check out the live video interview with Alistair:
If you were unable to attend the live Pulse 2010 event in Las Vegas, you can still attend the Virtual event - register today. You can also check out the Pulse Comes To You Web site to see if there will be an event in a city near you.
In the second half of 2009, the International Technology Group (ITG) was contracted to do a detailed analysis of IBM and competitive storage offerings for SAP to determine a three-year total cost of ownership (TCO) for each product included in the comparison. ITG developed two comparisons, one for Large Enterprise accounts and a second for Midmarket accounts, and chose appropriate competitive offerings for each. For the Large Enterprise accounts, ITG included EMC V-Max systems vs. IBM DS8000 systems and HP XP2400 vs. IBM XIV systems. For the Midmarket accounts, ITG included HP Enterprise Virtual Array (EVA) vs. IBM DS5000 systems and HP EVA vs. IBM XIV systems. ITG developed three-year TCO comparisons and provided IBM with an executive summary and a detailed analysis report that can be shared with customers.
Read the outcome of the analysis:
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Midsize Installations - Executive Summary
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Midsize Installations - Management Brief
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Enterprise Installations - Executive Summary
Title: Value Proposition for IBM System Storage Cost/Benefit Case for SAP Deployment in Enterprise Installations - Management Brief
ITG also participated in a Webcast that is available for replay discussing the results of their studies of comparative disk systems cost for SAP environments in large and midsized organizations.
Yesterday I interviewed Greg Johnson, CTO and Director of Technology and Engineering Services for Virginia Commonwealth University Health System (VCUHS). Greg presented at Pulse on Tuesday, discussing how VCUHS is transforming IT in a healthcare environment, focusing on their storage solutions and backup and recovery solutions. If you weren't able to attend Greg's session on Tuesday from 2:00 to 3:00 pm in Conference Center room 120, watch the video below and you'll see a high-level recap of what he presented.
Once again, this was a live interview from outside the expo hall at the MGM, and McCarran International Airport sure is one of the busiest airports in the world... maybe I should have done my interviews inside the conference. Still, I enjoyed the fresh air, and the airplanes in the background just add to the beauty of the live interview. I still think that journalism is a field that I will not be pursuing... hopefully my interview skills will improve before Pulse 2011, which will be Feb. 27 - Mar. 3, 2011.
I had the pleasure of interviewing one of our client speakers, Brian Perlstein from Oakwood Healthcare System. Brian will be presenting the Oakwood Healthcare System's virtualization story on Wednesday, Feb. 24th, from 9:30 to 10:30 am in Conference Center room 121. Hope to see you there!
Today I did several live video interviews. Let me be honest with you, it is clear that I wasn't meant to be in the journalism profession, uhm, now that is the truth!
I met many IBM clients and business partners throughout this week at Pulse, and today I did an interview with Roger Finney from Logicalis, which is an IBM Business Partner. We did this interview right outside the expo hall at the MGM Grand hotel, so you can hear the airplanes going over from McCarran International Airport.
Logicalis has been an IBM Business Partner for over 14 years and they are both Software Value Plus authorized and Tivoli Accredited. In this video, I ask Roger to provide some details about how Logicalis has helped their customers with their storage management needs.
Pulse kicked off today with the Business Partner Summit. I attended the IBM Information Archive session where the attending partners and I learned about the archiving ecosystem and how IBM Information Archive helps reduce costs, improve productivity and efficiency, and reduce risks. Information Archive is a simplified, cloud-ready smart business system.
Some important questions to help understand whether or not an archiving solution is needed include:
What are you doing to better manage the explosive growth of email + attachments on your mail servers?
What are you doing to better manage other types of content such as files, SharePoint data, social networking, images, videos, etc.?
Have you ever had to respond to a legal discovery request?
What is your litigation and/or compliance risk (how many lawsuits and/or industry regulations are you prepared to defend) and how are you managing paper and electronic info...
The partners in the session had a lot of great comments and questions. I met a few of the partners... Bill Mansfield from Logicalis and Mike Wiseley and Bruce Wolff from Agilysys. Below is a picture of Mike and Bruce.
If you are a partner and you were unable to attend the IBM Information Archive session (or you attended but want to hear more), you can attend the other sessions that are scheduled at Pulse:
- A technical look inside IBM's next generation archive appliance - Tuesday 3:30-4:30pm @ RM 120
- IBM's Smart Archive Strategy: Simplifying Information Retention - Tuesday 5:00-6:00pm @ RM 120
- Birds of a Feather: IBM Smart Archive Strategy Discussion - Tuesday 6:00-7:00pm @ RM 120
Next stop for me is the Pulse 2010 Business Partner Summit General Session!
In my previous blog, I discussed some of the viable approaches to data protection with virtual machines. Before I delve into the pros and cons of each approach, I'd like to discuss the fundamental differences between file-level and block-level backup (and solicit your input :-) ).
Encapsulation is one of the basic rules of software design. Simply put, it's the computer geek's equivalent of the famous "Don't ask, don't tell" policy. The idea is pretty simple: let's assume our file system is component A and our disk system is component B. Components A and B each publish a public interface that others can use, but they hide their internal mechanisms from the other components. This enables us to do some nifty tricks, such as RAID. As far as the file system is concerned, it is working with a "regular disk"; it is unaware that our disk system has actually taken the 100GB of disk space that we defined and partitioned it into multiple stripes that are spread across 5 different disks in order to provide it (the FS) with better performance and hardware fault tolerance. There are other places where this principle is used, but you have to agree that it comes in pretty handy.
But why do I even mention "encapsulation", and how is it relevant to file- vs. block-level backups?
The point I am trying to make is that the disk level is not aware of the "file contents" and the file system is not aware of the "disk layout"; this actually dictates the pros and cons of these two very different approaches to data protection.
With file-level backups it's really easy to define which files you want to protect. Then, when the time comes, someone has to access the files and move the data they contain to some sort of data repository. In order to do that, you must deal with issues such as:
- Open files
- Interdependencies between multiple files
- Identifying which (sub)files have changed
- For structured data (databases, etc.), do we back up the entire file (or file group) or only the portions that have changed?
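To make the "identifying which files have changed" problem concrete, here is a minimal sketch of one common approach: keep a catalog of content hashes from the last backup and compare it against a fresh scan. This is purely illustrative, not how any particular backup product works:

```python
import hashlib

def file_signature(data: bytes) -> str:
    """Content hash used to decide whether a file changed since last backup."""
    return hashlib.sha256(data).hexdigest()

def changed_files(previous: dict, current: dict) -> list:
    """Compare a saved catalog of {path: hash} against the current scan."""
    return [path for path, sig in current.items() if previous.get(path) != sig]

old = {"a.txt": file_signature(b"hello"), "b.txt": file_signature(b"world")}
new = {"a.txt": file_signature(b"hello"), "b.txt": file_signature(b"world!")}
print(changed_files(old, new))  # ['b.txt']
```

Real file-level products refine this with modification times, journaling, or file-system filters to avoid rescanning everything, but the core question is the same: which entries differ from the catalog?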
Block-level backups are usually pretty straightforward: there's a mechanism that keeps track of the changes in "real time" (this usually enables CDP, but that's a whole different story), and when the time comes the data is moved to the data repository. But this technology has its own challenges:
- Minimum granularity is usually a volume
- Hard to exclude unused file data (page file?)
- Recovering files from a block level backup
- Communicating with applications (and File System) to ensure backup consistency.
Generally speaking, block-level backups have a "lower overhead" than file-level backups, so if you decided to virtualize your environment and keep using agents on the individual virtual machines, you would probably want to use a block-level backup solution. File-level backups are still viable (especially if they skip the "indexing" process by using an FS filter or journaling and allow for "sub-file" incremental backups), but you will need to be more careful when planning your backup windows in order to prevent resource contention on the shared host.
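To illustrate the block-level approach described above, here is a toy sketch of a changed-block bitmap: writes mark blocks dirty, and an incremental backup copies only those blocks. This is purely illustrative; real products track changes at the driver or storage layer with much larger block sizes:

```python
BLOCK_SIZE = 4  # bytes per block for this toy; real systems use e.g. 4 KB

class BlockDevice:
    """Toy volume that tracks which blocks were written since the last backup."""
    def __init__(self, size: int):
        self.data = bytearray(size)
        self.dirty = set()  # the changed-block "bitmap"

    def write(self, offset: int, payload: bytes):
        self.data[offset:offset + len(payload)] = payload
        # Mark every block the write touched as dirty
        for block in range(offset // BLOCK_SIZE,
                           (offset + len(payload) - 1) // BLOCK_SIZE + 1):
            self.dirty.add(block)

    def incremental_backup(self) -> dict:
        """Copy only the dirty blocks, then reset the bitmap."""
        copy = {b: bytes(self.data[b * BLOCK_SIZE:(b + 1) * BLOCK_SIZE])
                for b in sorted(self.dirty)}
        self.dirty.clear()
        return copy

dev = BlockDevice(16)
dev.write(0, b"abcd")
dev.write(6, b"xy")  # spans into block 1
print(sorted(dev.incremental_backup()))  # [0, 1]
```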
Stay tuned; next we'll discuss other approaches, such as proxy backups.
The countdown is on... with only 2 weeks left until Pulse 2010, I wanted to give you an update on additional perks you'll have access to if you register and attend.
Meet the Experts! Talk one-on-one with Product Experts
Booth 80: SAN Volume Controller and Tivoli Storage Productivity Center – storage virtualization, storage resource management, data discovery
Optimizing Infrastructure: Smarter Systems, Storage and Information Retention Zone
Booth 92: IBM Information Archive and IBM Smart Archiving Strategy
Booth 93: IBM XIV: Storage Reinvented
Booth 95: IBM System Storage DS8000 Series
Delivering Business Value with Smarter Services
Booth 79: IBM Storage Enterprise Resource Planner
Check out my previous blog, The Pulse Roadmap to Storage Expertise, for information on some of the sessions that you can attend. Use the on-line agenda tool to build your agenda and view all the sessions available (requires only an IBM.com password - you do NOT have to be a Pulse registered attendee to create a Pulse schedule online).
Share Your Story
This year at Pulse 2010 we are scheduling videotaped interviews with clients who are willing to share their thoughts on what they are doing to achieve visibility, control, and automation in their infrastructure. We will be filming client videos at Pulse starting Sunday, February 21, through Wednesday, February 24. The content will be used to produce short videos that tell the stories of the needs clients are addressing in their organizations. Our customers have been sharing their stories throughout 2009, as you can see below. Interested in participating? Notify me at email@example.com
The last time I blogged, I was telling you about IBM Tivoli Storage FlashCopy Manager on Windows and just how cool it was. Well, I am working on some more neat stuff, and I wanted to tell you about the beta program for the upcoming release of IBM Tivoli Storage FlashCopy Manager. If you want to test some of its new functions and features, please contact Mary Anne Filosa (firstname.lastname@example.org) or your IBM Sales representative to get details.
The enrollment period is ending soon, so don't wait to be a part of the action!
Live Demo! IBM Tivoli Storage Manager FastBack and IBM Tivoli Storage Manager FastBack for Exchange Scheduled Dates in 2010
Mark Your Calendars! IBM will be presenting a series of live demonstrations dedicated to showing the value of the IBM Tivoli Storage Manager (TSM) FastBack and TSM FastBack for Exchange data protection products. These additions to the TSM product family offer the ability to meet aggressive Recovery Point and Recovery Time Objectives in an organization's data protection service.
The TSM FastBack family provides many advanced features, including:
- Instant Restore allows users to access their data or application immediately, while the restore is taking place.
- Continuous Data Protection sends backup data continuously, which allows a recovery to be done to any point in time.
- Incremental Forever Backups prevent wasting time and money performing and storing unnecessary full backups. Each backup appears to be a full backup, but only the blocks that have been modified are copied.
- FastBack Mount allows access to backed-up data without it being recovered. This enables data to be validated after backups, the correct data to be identified before it is recovered, or data to be opened and its contents recovered at a more granular level, thus reducing the size and time of the recovery.
- Exchange Brick-level Recovery allows individual Exchange mail objects to be recovered from a previous backup without requiring an entire Exchange database to be recovered. TSM FastBack for Exchange does not require additional backup processing to provide IMR.
- Branch Office Disaster Recovery allows replication of branch office backup data to a central site. This data can be compressed and encrypted during the transfer. The replicated data at the central site can be used as the source for creating a tape copy of the data or for recovering branch office data and hosts. TSM FastBack allows the backups and replication of multiple branch offices to be monitored with a single tool.
- TSM FastBack Bare Machine Recovery allows hosts to be quickly recovered, even to dissimilar hardware.
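The "incremental forever" idea, where every backup looks like a full backup but only changed blocks are stored, can be sketched by layering block-level increments over a base image. This is a simplified illustration of the concept, not FastBack's actual implementation:

```python
def synthesize_full(base: dict, increments: list) -> dict:
    """Overlay block-level increments (oldest first) onto the base backup,
    producing what looks like a full image for that point in time."""
    image = dict(base)
    for inc in increments:
        image.update(inc)
    return image

base = {0: b"AAAA", 1: b"BBBB", 2: b"CCCC"}  # initial full backup
inc1 = {1: b"bbbb"}                          # day 1: only block 1 changed
inc2 = {2: b"cccc"}                          # day 2: only block 2 changed
print(synthesize_full(base, [inc1, inc2]))
# {0: b'AAAA', 1: b'bbbb', 2: b'cccc'}
```

Restoring to an earlier point in time is just a matter of stopping the overlay at the desired increment, which is also why any-point-in-time recovery pairs naturally with continuous block-level capture.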
These demonstrations are open to Customers, Business Partners and IBM employees.
TSM FastBack Demo Schedule for 2010: Demos will be available in English and Spanish. All English calls will be at 10:30 AM and 3:00 PM Central Time on Thursdays. All Spanish calls will be available at 1:00 PM Central Time on Wednesdays.
February: Wednesday 10th - Spanish 1:00 PM CT, Thursday 11th - English 10:30 AM CT , Thursday 25th - English 3:00 PM CT
March: Wednesday 10th - Spanish 1:00 PM CT, Thursday 11th - English 10:30 AM CT, Thursday 25th - English 3:00 PM CT
April: Thursday 8th - English 10:30 AM CT, Wednesday 14th - Spanish 1:00 PM CT, Thursday 22nd - English 3:00 PM CT
May: Wednesday 12th - Spanish 1:00 PM CT, Thursday 13th - English 10:30 AM CT, Thursday 27th - English 3:00 PM CT
June: Wednesday 9th - Spanish 1:00 PM CT, Thursday 10th - English 10:30 AM CT, Thursday 24th - English 3:00 PM CT
July: Thursday 8th - English 10:30 AM CT, Wednesday 14th - Spanish 1:00 PM CT, Thursday 22nd - English 3:00 PM CT
August: Wednesday 11th - Spanish 1:00 PM CT, Thursday 12th - English 10:30 AM CT, Thursday 26th - English 3:00 PM CT
September: Wednesday 8th - Spanish 1:00 PM CT, Thursday 9th - English 10:30 AM CT, Thursday 23rd - English 3:00 PM CT
October: Wednesday 13th - Spanish 1:00 PM CT, Thursday 14th - English 10:30 AM CT, Thursday 28th - English 3:00 PM CT
November: Thursday 4th - English 10:30 AM CT, Wednesday 10th - Spanish 1:00 PM CT, Thursday 18th - English 3:00 PM CT
December: Thursday 2nd - English 10:30 AM CT, Wednesday 8th - Spanish 1:00 PM CT, Thursday 16th - English 3:00 PM CT
There are Web Conference and Audio Conference components to this demonstration. Web Conference: www.sametimeunyte.com, Conference ID: FASTBAK. Prior to the web conference, we suggest you do the following:
1) Go to www.sametimeunyte.com
2) Click on Support
3) Click on Lotus Sametime Unyte Meeting System Check
4) Select attendee type and click Next
5) Proceed with the system check and install any plug-ins required.
English Live Demo Audio Conference: Title: TSM Fastback LIVE Demo Passcode: FASTBACK Toll Free: 800-857-4143 Toll: 773-756-0845
I don't know about you, but I have been virtualizing like crazy over the last few years: humongous servers have been turning into medium-sized virtual machines, and test and lab environments have turned into small files running on my laptop from a flash drive. My IT department has been virtualizing even more: consolidating servers, sharing storage resources among multiple machines, and converting NICs (Network Interface Cards) into virtual switches (I still haven't figured out how they did that). The move into a virtualized environment is very useful for reducing energy consumption, decreasing physical server and storage footprint, and driving up processor and storage utilization, but it also has some side effects when it comes to data protection. The problem begins at the same place that drove us into virtualization to begin with: resource sharing. You may now have 10 virtualized servers running on the same physical host. If your backup process consumed only 5% CPU and I/O on your physical server, imagine what would happen if all 10 virtual machines kick off the backup process at the same time... There are multiple valid approaches for providing data protection to those virtual machines, and I'll try to address each and every one of them in upcoming blogs:
File-based vs. block-based backups
Keep your existing backup methodology (Agent-based backup)
Perform the backup through the host (VMware console/hyper-v host OS)
Hardware based snapshots
Utilize vendor specific APIs that provide "agentless" or off-host backup (VMware's VCB and vStorage)
Other enhancements that might not necessarily be backup related but have to be seriously considered when virtualizing include:
Deduplication (client side or target side)
Stay tuned, I'll be going into more detail about file-based vs. block-based backups in my next blog.
Yesterday, in discussing IBM's fourth quarter 2009 financial results, IBM CFO Mark Loughridge had this to say about Storage Software:
"Tivoli storage continued its robust growth as customers manage their rapidly growing storage data. Data Protection as well as Storage Management grew double digits, with broad-based geography and sector growth."
Have you played around with IBM Tivoli Storage FlashCopy Manager on Windows yet? If not, maybe it's time to take a look.
When you think of FlashCopy Manager, think of snapshots. FlashCopy Manager provides fast application-aware backups and restores leveraging advanced snapshot technologies. I have been writing software as a developer for IBM Tivoli Storage Manager for almost 20 years now and this technology is one that is changing the industry. Yes, snapshots have been around for a while, but it isn't until the last few years that applications are really starting to embrace them, and in some cases, even require them for their backup needs. There is just too much data to process, too much overhead to back them up, and too little time. People want their applications to serve email and provide access to database tables, not spend their precious cycles on backups. FlashCopy Manager helps address these issues.
FlashCopy Manager follows on the heels of IBM Tivoli Storage Manager for Copy Services (TSM for CS), which provided snapshot support for Microsoft SQL Server and Microsoft Exchange Server using Microsoft's Volume Shadow Copy Service (VSS). The really cool thing is that you do not need to have a TSM Server in order to use FlashCopy Manager to manage your snapshots. It will work completely stand-alone if you want. But, if you have a TSM Server already, you can use it to extend the power of FlashCopy Manager even more.
What is VSS? VSS is Microsoft's snapshot architecture. It provides the infrastructure for applications, storage vendors, and backup vendors to perform snapshots in a federated and efficient way. Microsoft thinks VSS and snapshots are important enough to require that any new software releases coming out of Redmond be able to be backed up and restored using VSS. If you are running Microsoft Exchange Server or Microsoft SQL Server, you should take a look at snapshots. Microsoft has been supporting snapshots with Exchange and SQL for years, but Microsoft Exchange Server 2010 is kicking it up a notch: it supports backups only through VSS. Yes, you heard it right, Microsoft does not support legacy-style (streaming) backups with Exchange Server 2010. So, if you are planning a move to Exchange Server 2010, it really behooves you to start looking at Microsoft's Volume Shadow Copy Service (VSS), how it works, and the benefits and complexities it brings with it.
Microsoft's Volume Shadow Copy Service (VSS) is complex and involves multiple moving parts. It will pay for you to invest some time to understand more about it. I have put together some links that will help you get started:
Living in Boulder, Colorado, I am constantly hearing about "green" initiatives such as recycling, composting, alternative transportation, etc. Over the past several years, my family has been doing a much better job of lessening our impact on the Earth through things such as recycling, buying environmentally friendly products and even signing up for energy saving smart grid technology.
I appreciate when corporations also do their part to reduce their environmental impact by leveraging greener technologies. But let's face it, most corporations act based on the impact to the bottom line (whether real or perceived) rather than the impact to the environment. Companies like IBM can make the decisions easier for clients by building products that improve performance while reducing energy use or other environmental impacts.
I'm proud when IBM delivers "green" technology and thus wanted to point your attention to this video about energy efficient storage. Craig Smelser, VP of Security and Storage Development at IBM Tivoli, introduces some of the storage challenges that can be addressed with energy efficient IBM storage software solutions.
We have gathered a team of SMEs from various areas of the business to discuss a variety of topics, spanning different interest areas including customer success stories, upcoming events, Business Partner spotlights, technical tips and tricks, product strategy, roadmaps and hot topics -- and of course, topics of interest to you!
Introducing the team!
BJ Klingenberg: Senior Technical Staff Member - Storage Software, IBM Software Group BJ has over 25 years of storage software strategy and development experience. He has held various technical and management positions, nearly all of which have been related to storage software. His experience in enterprise storage management includes DFSMS, DFSMShsm, DFSMSdss, and also Tivoli Storage Manager, Tivoli Storage Productivity Center (TPC), as well as System Storage SAN Volume Controller (SVC). He has also been involved in projects which apply ITIL management best practices to enterprise storage management. BJ is currently focusing on storage archiving solutions. BJ is a graduate of the University of Illinois Urbana-Champaign, where he received a Bachelor of Science degree in Computer Science, and holds a Master of Science degree in Computer Science from the University of Arizona.
Dave Rice: Business Partner Marketing, Tivoli Storage Software Dave currently works in IBM's Worldwide Software Group, where he drives Business Partner Marketing for Tivoli storage software with a focus on the Asia Pacific and Japan geographies. In this role, Dave influences the Business Partner sales pipeline through lead/pipeline analysis, progression activities, partner communications, and implementing programs that provide Business Partner opportunity identification. Dave has been in a broad set of storage software marketing roles for the past 13 years, and has 35 years with IBM. Outside of IBM, Dave's interests include astronomy, as well as home and life improvement projects.
Del Hoobler: Senior Software Engineer Del is a Senior Software Engineer who has worked for IBM for over 20 years in software design, development, and services. For the past 13 years, he has worked on designing and developing software products for the IBM Tivoli Storage Manager (TSM) suite of products. Most recently, Del was the technical development lead for the TSM Windows snapshot (VSS) support for Microsoft Exchange Server and Microsoft SQL Server. Del enjoys working with people and helping solve their complicated IT problems.
Devon Helms is currently an intern with the IBM Tivoli Software group and a second-year MBA candidate at the Paul Merage School of Business at UC Irvine. His studies are focused on business strategy and corporate finance. Before returning to the academic world to pursue his MBA, Devon was a business operations and technology consultant. He has been involved in hundreds of engagements, analyzing and improving his customers' business processes. After his studies are complete, Devon wants to continue to help clients improve the performance of their businesses through business process and financial analysis. In his free time, Devon is an avid marathon runner, rock climber, and SCUBA diver. Devon lives in Lakewood, CA with his lovely wife, Shana, and his 8-year-old Siberian Husky and faithful running partner, Frosty.
Greg Tevis: Tivoli Storage Technical Strategist Greg has over 27 years in IBM storage hardware and software development. He worked in ADSM/TSM architecture and technical support in the 1990s and was one of the original architects of IBM's storage resource management solution, Tivoli Storage Productivity Center (TPC). He currently has responsibility for technology strategy for all Tivoli Storage and was involved in all of the recent IBM Storage acquisitions including XIV, Diligent, FilesX, Novus Consulting, and Arsenal Digital.
Jason Davison Jason has been the product manager for the Tivoli Storage Productivity Center (TPC) family since joining IBM in 2006. Prior to joining IBM, Jason was a product manager at EMC and Prisa Networks, responsible for the road map and strategy of various storage management offerings. When not helping define the direction for TPC, Jason acts as the President for Classic Soccer Club, a youth soccer club where his son currently plays.
John Connor: Product Manager John is the Product Manager for IBM's flagship data protection and recovery offerings, the Tivoli Storage Manager family. During John's tenure as product manager, TSM has experienced strong growth, growing faster than the overall market and gaining market share. Prior to joining the Tivoli Storage Manager team in 2005, John helped drive the business strategy for IBM Retail Store Solutions. Before that, John held product and marketing roles in various IBM software businesses, including WebSphere and networking software. John has an MBA from Duke University and an undergraduate degree in electrical engineering from Manhattan College. In his spare time, John enjoys competing in triathlons and has successfully completed an Ironman triathlon.
John R. Foley Jr.: Product Marketing Manager John is currently a marketing manager within IBM's Tivoli storage software marketing team. John has over 20 years of experience in the areas of storage hardware, storage software and system networking. He has held positions in management, product line management, strategy, business development and marketing. In the past 10 years, he has served on multiple storage projects including SAN storage (fibre channel & iSCSI), Network Attached Storage (NAS) and fibre channel switch offerings. Most recent projects include the introduction of IBM's System Storage N series portfolio stemming from the NetApp OEM agreement and the release to market of IBM's newly introduced Tivoli Storage Productivity Center Version 4 and IBM Information Archive Version 1.
Kelly Beavers: IBM Storage Software Business Line Executive Kelly joined the IBM Storage Software team in 2004 as Director of Strategy and Product Management for Storage Software and Solutions. Her team is responsible for guiding the development and release of products that capitalize on market/technology trends, and for defining and executing tactical go-to-market plans for IBM storage software solutions across both the Tivoli and Systems Storage brands. Kelly has 28 years with IBM where she's held a variety of roles including Finance, Pricing, Tivoli Channel Development, Director of Customer Insight, managing Market Intelligence, Customer Relations and Marketing Operations. Kelly is married with two daughters, ages 19 and 12.
Matt Anglin: Tivoli Storage Manager Development Matt has been a member of the Tivoli Storage Manager Server Development Team for 15 years. His areas of expertise include data movement to and within the server, deduplication, shredding, and DB2 interactions. He is the AIX platform expert in TSM, and is knowledgeable about other Unix, Linux, and Windows platforms. Matt lives in Tucson, Arizona.
Matthew Geiser: Manager, Storage Software Product Management Matt joined IBM in 2001 and has worked in product management and product development for Storage Software offerings including SAN Volume Controller, Tivoli Productivity Center, Tivoli Storage Manager and IBM Information Archive. Matt's current responsibilities include managing the product management team for the storage infrastructure management offerings. Prior to IBM, Matt worked in a variety of operations, project management and software development roles in the banking and energy industries.
Milan Patel: Senior Product Marketing Manager Milan is responsible for Product Marketing of IBM storage software for virtualized server environments, storage clouds, and of course the everyday issues in storage management like backup, recovery, archiving, and replication. Milan has been with IBM for over 6 years, working in server and storage systems and storage software marketing groups. Prior to that, Milan spent 13 years in various capacities, from development to product management, for various server subsystems and systems management.
Richard Vining: Product Marketing Manager Rich is the Product Marketing Manager responsible for the IBM Tivoli Storage Manager portfolio of products. Rich joined IBM in April 2008 as part of the acquisition of FilesX, where he served as Director of Marketing. Rich has more than 20 years of experience in the data storage industry, holding senior management roles in marketing, alliances, customer support and product management at a number of leading edge companies, including Signiant, OTG Software, Plasmon and Cygnet. Rich enjoys eating, drinking, travelling and golfing (but doesn't everybody?)
Rodney Fannin: Worldwide Channel Manager, Tivoli Storage Software Rodney has over 15 years of experience working with Business Partners. His primary responsibilities include refining the channel strategy for storage software and developing sales and marketing tactics to increase reseller revenue worldwide. Rodney is also a contributing author for the BP Spotlight on our blog.
Roger Wofford: Product Manager Roger is currently a Product Manager in Tivoli Storage Software. He has experience in Manufacturing, Development, Marketing and Sales within IBM. He enjoys golf, swimming and the Rocky Mountains. Roger plans to blog about how customers use archiving solutions in their storage environments.
Ron Riffe: IBM Storage Software Business Strategist Ron is currently the business strategist for IBM Storage Software. During the last six years, Ron has been devising and implementing IBM's storage software strategy with a focus on creating greater client value through integrating IBM storage software and storage hardware offerings. Ron has managed storage systems and storage management software for more than 23 years, holding positions in senior management, product line management, strategy and business development for both IBM System Storage and IBM Tivoli Storage. Ron has written papers on the synergies of storage automation and virtualization and frequently speaks at conferences and customer locations on the subject of storage software. Prior to joining IBM, Ron spent 10 years as a corporate storage manager for international manufacturing firm Texas Instruments after receiving a B.S. in Computer Science from Texas A&M University.
Shawn Jaques: Manager, IBM Tivoli Storage Product Management Shawn has been in his current role as manager of storage software product management for nearly three years. The team is responsible for product strategy, content, positioning, and pricing of IBM storage software solutions. Previously, Shawn held product and market management roles in other Tivoli product areas, as well as a stint in Tivoli Strategy. Before joining IBM, Shawn was a Consulting Manager at Cap Gemini consulting and an Audit Manager at KPMG. Shawn has a Master of Business Administration from The University of Texas at Austin and a Bachelor of Science from the University of Montana. He lives in Boulder, Colorado and enjoys fly-fishing, skiing, and hiking with his wife and kids.
Terese Knicky: Analyst Relations, Tivoli Terese is with Tivoli's analyst relations team, covering Storage, System z, Job Scheduling, and IBM's General Enterprise solutions. Terese was born and raised in Omaha, NE and transplanted to Texas, where she enjoys watching her two boys play college football.
And finally, let's talk about me. I'm Tiffeni Woodhams, and I have been with IBM for nearly seven years. Currently, I am a Tivoli Storage Marketing Manager responsible for general marketing activities, ranging from pipeline measurement and tracking to providing marketing execution guidance and communications to the geography teams. I also serve as the Tivoli Storage Social Media lead and co-lead for the IBM Storage social computing strategy, and I work on major launches like Dynamic Infrastructure and Information Infrastructure, providing the storage messaging and linkages. Prior to this role, I held several other marketing positions, including Tivoli Provisioning Go-to-Market Manager, Benelux Software Marketing Manager focusing on Tivoli, WebSphere, and Lotus, Americas Tivoli Marketing Manager, and Tivoli Launch Strategist. In my spare time, I enjoy playing sports (basketball, softball, and golf), coaching JV girls basketball, riding horses, and spending time with family and friends.
Now that you know a little background on each of the team members, we hope that you will let us know some of your interest areas when it comes to IBM Storage and IBM Tivoli Storage Software solutions. Please post comments to this blog and let us know what you want to hear about.
Some topics we will be discussing in the next month include: - Pulse 2010, the Premier Service Management Event - Data Reduction: the steps to get to where you want to be - Archiving: why you need to do it - Upcoming webinars - Unified Recovery Management - New product announcements and roadmaps.
Thanks and we look forward to hearing your feedback.