TSM FastBack for Workstations is a centrally managed solution that reduces the risk of losing important information stored on thousands of personal computers across an entire enterprise.
IBM will be running a beta program for the next release of this product, providing those taking part with early access to the latest planned enhancements. If you would like to participate, please contact the beta coordinator, Matthew Boult (firstname.lastname@example.org).
Cloud & Smarter Infrastructure Storage Blog
Announcing a Beta Program for the Next Release of Tivoli Storage Manager (TSM) FastBack for Workstations
Posted by Omar Vargas
Posted by Tiffeni Woodhams
The countdown is on... with only two weeks left until Pulse 2010, I wanted to give you an update on the additional perks you'll have access to if you register and attend.
Meet the Experts!
Talk one-on-one with Product Experts
Visit the Expo!
Share Your Story
This year at Pulse 2010 we are scheduling videotaped interviews with clients who are willing to share their thoughts on what they are doing to achieve visibility, control, and automation in their infrastructure. We will be filming client videos at Pulse starting Sunday, February 21, through Wednesday, February 24. The footage will be used to produce short videos that showcase the needs clients are addressing in their organizations. Our customers have been sharing their stories throughout 2009, as you can see below. Interested in participating? Notify me at firstname.lastname@example.org
Posted by Oren Wolf
I don't know about you, but I have been virtualizing like crazy over the last few years: humongous servers have turned into medium-sized virtual machines, and test and lab environments have turned into small files running on my laptop from a flash drive.
My IT department has been virtualizing even more: consolidating servers, sharing storage resources among multiple machines, and converting NICs (Network Interface Cards) into virtual switches (I still haven't figured out how they did that).
The move to a virtualized environment is very useful for reducing energy consumption, decreasing physical server and storage footprint, and driving up processor and storage utilization, but it also has some side effects when it comes to data protection.
The problem begins at the same place that drove us to virtualization in the first place: resource sharing. You may now have 10 virtualized servers running on the same physical host. If your backup process consumed only 5% of the CPU and I/O on your physical server, imagine what would happen if all 10 virtual machines kicked off the backup process at the same time...
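One simple way around that collision is to stagger the backup start times of the co-located VMs across the backup window. The sketch below is my own illustration of the idea, not a feature of any particular product:

```python
def staggered_schedule(vm_names, window_start_hour, window_hours):
    """Spread backup start times evenly across the window so VMs sharing
    one physical host don't all hit its CPU and I/O at the same time."""
    minutes_per_vm = (window_hours * 60) / len(vm_names)
    schedule = {}
    for i, vm in enumerate(vm_names):
        offset = int(i * minutes_per_vm)
        schedule[vm] = (window_start_hour + offset // 60, offset % 60)
    return schedule

# Ten VMs sharing a host, with a backup window of 01:00-05:00:
for vm, (hour, minute) in staggered_schedule(
        [f"vm{n}" for n in range(10)], 1, 4).items():
    print(f"{vm} starts at {hour:02d}:{minute:02d}")
```

This trades a longer total backup window for a bounded load on the shared host at any one moment.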
There are multiple valid approaches for providing data protection to those virtual machines and I’ll try to address each and every one of them in upcoming blogs…
Other enhancements that are not necessarily backup related, but have to be seriously considered when virtualizing, include
Posted by Oren Wolf
In my previous blog I discussed some of the viable approaches to data protection for virtual machines. Before I delve into the pros and cons of each approach, I'd like to discuss the fundamental differences between file-level and block-level backup (and solicit your input :-) ).
Encapsulation is one of the basic rules of software design. Simply put, it's the computer geek's equivalent of the famous "don't ask, don't tell" policy. The idea is pretty simple: let's assume our file system is component A and our disk system is component B. Components A and B publish a public interface that others can use, but they hide their internal mechanisms from the other components. This enables us to do some nifty tricks, such as RAID: as far as the file system is concerned, it is working with a "regular disk". It is unaware that our disk system has actually taken the 100GB of disk space that we defined and partitioned it into multiple stripes spread across 5 different disks, in order to provide it (the FS) with better performance and hardware fault tolerance. There are other places where this principle is used, but you have to agree that it comes in pretty handy.
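To make the encapsulation point concrete, here is a toy striping sketch (my own illustration, not how any real RAID implementation works): the file system asks for logical block N, and the disk layer quietly spreads consecutive blocks round-robin across the disks.

```python
def stripe_location(logical_block, num_disks):
    """Map a logical block number (what the file system sees) to the
    physical (disk, block) pair the disk layer actually uses.
    The file system never sees this mapping -- that's encapsulation."""
    disk = logical_block % num_disks           # round-robin across disks
    physical_block = logical_block // num_disks
    return (disk, physical_block)
```

With 5 disks, logical blocks 0-4 land on disks 0-4, block 5 wraps back to disk 0, and so on; consecutive reads hit different spindles, which is where the performance win comes from.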
But why do I even mention "encapsulation", and how is it relevant to file- vs. block-level backups?
The point I am trying to make is that the disk level is not aware of the file contents, and the file system is not aware of the disk layout. This actually dictates the pros and cons of these two very different approaches to data protection.
With file-level backups it's really easy to define which files you want to protect. Then, when the time comes, someone has to access the files and move the data they contain to some sort of data repository. To do that you must deal with issues such as:
- Open files
- Interdependencies between multiple files
- Identifying which (sub)files have changed
- For structured data (databases etc.), do we back up the entire file (or file group) or only the portions that have changed?
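A naive file-level change scan looks something like the sketch below (a minimal illustration of my own, assuming a plain directory tree; a real agent also has to handle the open-file and sub-file issues listed above):

```python
import os

def changed_files(root, last_backup_time):
    """Walk the tree and collect files modified since the last backup.
    This only compares modification times; it says nothing about which
    parts of a file changed (the sub-file problem above)."""
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > last_backup_time:
                    changed.append(path)
            except OSError:
                pass  # file vanished or is locked -- one of the open-file issues
    return changed
```

Note that this scan itself costs I/O, which is why the FS-filter and journaling tricks mentioned later are attractive: they avoid re-walking the whole tree.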
Block-level backups are usually pretty straightforward: a mechanism keeps track of the changes in "real time" (this usually enables CDP, but that's a whole different story), and when the time comes the data is moved to the data repository. This technology has its own challenges, though:
- Minimum granularity is usually a volume
- Hard to exclude unused file data (page file?)
- Recovering files from a block level backup
- Communicating with applications (and File System) to ensure backup consistency.
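As a rough illustration of the mechanism (a toy sketch of my own; real products hook the I/O path with a filter driver rather than wrapping writes like this), block-level change tracking boils down to a dirty-block bitmap:

```python
class ChangedBlockTracker:
    """Minimal sketch of block-level change tracking: a bitmap marks which
    blocks were written since the last backup, so an incremental backup
    copies only those blocks -- with no idea which files they belong to."""

    def __init__(self, num_blocks, block_size=4096):
        self.block_size = block_size
        self.disk = bytearray(num_blocks * block_size)
        self.dirty = [False] * num_blocks

    def write_block(self, block_no, data):
        assert len(data) == self.block_size
        start = block_no * self.block_size
        self.disk[start:start + self.block_size] = data
        self.dirty[block_no] = True  # record the change in real time

    def incremental_backup(self):
        """Return {block_no: bytes} of changed blocks, then reset the bitmap."""
        changed = {n: bytes(self.disk[n * self.block_size:(n + 1) * self.block_size])
                   for n, d in enumerate(self.dirty) if d}
        self.dirty = [False] * len(self.dirty)
        return changed
```

The bitmap makes the incremental pass cheap, but it also shows the challenges above: the tracker sees volumes and blocks, not files, so excluding a page file or restoring a single document requires extra machinery on top.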
Generally speaking, block-level backups have a lower overhead than file-level backups, so if you decided to virtualize your environment and keep using agents on the individual virtual machines, you would probably want to use a block-level backup solution. File-level backups are still viable (especially if they skip the "indexing" process by using an FS filter or journaling and allow for sub-file incremental backups), but you will need to be more careful when planning your backup windows in order to prevent VM sprawl.
Stay tuned: next we'll discuss other approaches, such as proxy backups.
Posted by Tiffeni Woodhams
Pulse 2011 Call for Speakers Opens Wednesday, September 22!
Boy oh boy, time sure flies when you're having fun. It seems like I was just at Pulse 2010 in Las Vegas, being a roving reporter, capturing customer, business partner and Subject Matter Expert Videos. It's actually been about nine months and once again it's time to ramp up for Pulse 2011.
Pulse will return to the MGM Grand in Las Vegas February 27 through March 2, 2011. Just like Pulse 2010, we're looking for client speakers to share their success stories and speak in the different track sessions. Do you have a storage success story? What are you doing to make your organization smarter when it comes to storing and backing up your data? How do you gain visibility across your infrastructure, including your storage environment? Are you in control of your data, no matter where it resides? How have you leveraged automation technologies to manage the explosion of data, and the need for instant accessibility? We want to hear from you! What software, hardware and services are you utilizing to deliver better services within your organization, to your internal and external customers? Come share your story of how you're using IBM Storage as a part of your organization's Integrated Service Management implementation.
At Pulse 2010, there were over 300 client speakers and if you weren't a speaker then, you should definitely submit your proposal for Pulse 2011. Check out the benefits of being a client speaker!
Client Speaker Benefits:
Pulse 2011 client speakers will receive complimentary registration to the conference, and the first 50 to submit a proposal will receive a FREE hotel accommodation upgrade* to a Celebrity Spa Suite at the MGM Grand if the proposal is accepted!
*The speaker pays for the basic room and will be awarded the upgrade if they submit one of the first 50 papers to be accepted.
Read Jennifer Dennis' blog Pulse 2011 Call for Speakers - Opens 9/22 @ibm.com/pulse! for details on submitting your proposal. Don't delay; get ready to submit your proposal right away, because those 50 upgrades will go fast!
Here are some customer speaker interviews I did during Pulse 2010; hopefully these will give you an idea of what you can submit in your proposal.
Posted by Tiffeni Woodhams
In response to: Pulse 2011 Call for Speakers - Share Your Storage Success Story

The Call for Speakers has been delayed until Friday, September 24, 2010.
Posted by Tiffeni Woodhams
Yesterday I interviewed Greg Johnson, CTO and Director of Technology and Engineering Services for Virginia Commonwealth University Health System (VCUHS). Greg presented at Pulse on Tuesday, discussing how VCUHS is transforming IT in a healthcare environment, with a focus on their storage, backup, and recovery solutions. If you weren't able to attend Greg's session on Tuesday from 2:00 to 3:00 pm in Conference Center room 120, watch the video below for a high-level recap of what he presented.
Once again, this was a live interview from outside the expo hall at the MGM, and McCarran International Airport sure is one of the busiest airports in the world... maybe I should have done my interviews inside the conference. Then again, I enjoyed the fresh air, and the airplanes in the background just add to the charm of a live interview. I still think journalism is a field I will not be pursuing... hopefully my interview skills will improve before Pulse 2011, which will be held Feb. 27 - Mar. 3, 2011.
Posted by Tiffeni Woodhams
With only 4 weeks until Pulse 2010 - The Premier Service Management Event - Optimizing the World's Infrastructure, I thought it might be helpful to provide some details around the sessions and activities that will be available to all of you storage and information infrastructure enthusiasts out there....
Here are a few sessions that you can attend each day. Sign up for these sessions and others today (requires only an IBM.com password - you do NOT have to be a Pulse registered attendee to create a Pulse schedule online)!
Go to the on-line agenda tool to see additional Storage and Information Infrastructure sessions that may be of interest to you. There are also sessions in the Expo Theater Stream.
Register and attend Pulse to take full advantage of all that will be offered:
Posted by Tiffeni Woodhams
New Video: ManTech International helps the United States Department of State reduce their backup and recovery time using Tivoli Storage Manager FastBack
Peter Stark is an executive director at ManTech International, which is under contract to the United States Department of State to provide global IT modernization of all State Department information systems around the world. The department operates two physically separate worldwide networks for classified and unclassified data, with up to 3,000 servers spread across the globe. By using the Tivoli Storage Manager FastBack solution, they are able to take eight snapshots a day from the Exchange server, each of which takes only two or three minutes to run, and can recover objects in 5 or 10 minutes; previously this was not feasible with a 46-hour backup and recovery time.
Posted by Tiffeni Woodhams
New Video: Tivoli Storage Manager runs a smarter Data Center
Ohio Health has eight member hospitals, nine affiliate hospitals and numerous outpatient facilities throughout Ohio. Many of their clinical systems run on pSeries hardware with the AIX operating system. They have two data centers, a primary and a secondary site, and run systems at either site. Backups are critical in their clinical environment because they affect patient care. They use Tivoli Storage Manager for their backup environment: Tivoli Storage Manager writes directly to the primary site, and the data gets replicated to the second data center. Using a disk-based backup method, they have shaved seven hours off admin processing time because they don't have to write off-site copies.
Watch the video and hear why Ohio Health loves using Tivoli Storage Manager
Posted by Maria Huntalas
VCU Medical Center is one of the leading academic medical centers in the United States and the only academic medical center in central Virginia, offering state-of-the art care in more than 200 specialty areas along with Level 1 trauma care.
For VCU Health System, technology provides the foundation for transforming clinical services and delivering patient care. However, with a heterogeneous storage infrastructure and no single user interface, the team’s three storage engineers faced significant hurdles in managing growing data volumes and recovering data quickly when needed.
Working with IBM, the health system implemented a virtualized, scalable and high-performance storage infrastructure that improves service levels, reduces costs, mitigates risks and supports an increasing amount of data (growing at more than 20 percent annually).
- Reduced data recovery time
- Shortened data migration process, with greater probability of success
- Standardization and consolidation of storage systems, reducing the storage footprint and decreasing data center temperatures from 43 to 68 degrees
- Lower cooling and energy costs
- Reduced storage spending
“With Tivoli Storage Manager, we can set multiple recovery point objectives and the XIV allows us to keep multiple snapshots of the data without impacting performance. So we can have copies and copies and copies of the data where we couldn’t before.”
—Greg Johnson, CTO and Director, Technology & Engineering Services, VCU Health System
Read the complete case study for more details on how VCU Medical Center worked with IBM to gain uninterrupted data access.
More success stories of other customer implementations of IBM technologies can be found here.