As you may know, blogs are moving to new locations. You can find my blogs at the following link.
I am looking forward to another great year in Data Protection and Retention.
Shawn Brume
Technology is an amazing part of our daily lives. We carry the computing power of the computers that helped deliver astronauts to the moon in the palms of our hands (sometimes in our pockets). Broadband services deliver video chats to those same handheld devices, often without disruption. We also tend to trade in our technology every 14-18 months, no matter what it is: cell phones, MP3 players, tablets and even laptops have very limited lifecycles. That is not the case with enterprise hardware, especially in Data Protection and Retention infrastructures.
The name plainly indicates that enterprise-class data needs to be protected and retained for long periods of time; decades is the norm. Keeping data in high-performance systems does not automatically mean the underlying technologies remain available for long periods of time, but that concern rarely applies to IBM. The sheer quality of IBM systems has produced a history of long support periods for storage systems.
A tedious but very simple web search will reveal that these products end up supported for decades, fully protecting the investment IBM customers make. Table 1 is a brief outline of tape product lifecycles.
*Note: this does not represent a comprehensive list of products.
So what is the point? The products stay around for a long time, IBM services them for decades, and customers get to truly leverage their quality. Keeping older products in your infrastructure is a great way to stretch the capital outlay long after the product is amortized, but planning an upgrade can actually be more cost-effective for your business.
I won't pontificate long on the value of generational shifts, which introduce higher performance, higher storage density and features that often reduce management requirements. Think about your 3592 J1A tape drives operating at 40 MB/s with 300 GB of storage per cartridge. A 1 PB repository processing 100 TB of data per day requires 3,334 cartridges and 15 tape drives operating at 100% duty cycle, with three enterprise frames of cartridges to hold the data. Move this to just three TS1150 tape drives in one frame, supporting up to 5.5 PB without adding more library frames: one third the floor space and one quarter the power, letting the storage administrator add high-performance flash with the savings from tape storage.
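A minimal sketch of that sizing arithmetic follows. The cartridge math uses only the figures above; the 15-drive result only works out if roughly 2:1 compression is assumed on top of the 40 MB/s native rate, which is my assumption rather than a number from the text.

```python
import math

def cartridges_needed(repo_gb, cartridge_gb):
    """Cartridges required to hold a repository of repo_gb gigabytes."""
    return math.ceil(repo_gb / cartridge_gb)

def drives_needed(daily_tb, native_mb_s, compression=1.0):
    """Drives at 100% duty cycle needed to move daily_tb terabytes per day."""
    mb_per_day = daily_tb * 1_000_000            # TB -> MB, decimal units
    required_mb_s = mb_per_day / 86_400          # seconds in a day
    return math.ceil(required_mb_s / (native_mb_s * compression))

# 3592 J1A era: 300 GB cartridges, 40 MB/s native drives
print(cartridges_needed(1_000_000, 300))          # 3334 cartridges for 1 PB
print(drives_needed(100, 40, compression=2.0))    # 15 drives, assuming 2:1 compression
```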
In the enterprise VTL space this can produce an even more substantial benefit. The industry-leading TS7700 has been available since 2006 and is still in service almost 10 years later. Enabling customers to protect their data and realize the best possible value from critical data is what IBM DP&R does, and providing an easy way to manage migrations is part of that. There has never been a better time to plan a migration off a Generation 1 TS7700 (VEA/V06).
There are many upgrade paths that enable installation of new TS7720 or TS7720T systems. In almost every case, data can be migrated to the new systems without impacting the System z host. Automated replication is just the beginning of the benefits of migrating: higher performance, support for 4 million volumes, and advanced FlashCopy DR testing are a few of the tangible gains. In a critical-data environment, planning for the next decade is the best thing a storage admin can do to ensure quality of operations and prevent emergencies.
Take a look at your environment: the old iron may be working great, but planning today to replace it will save management time and expense.
Stay Tuned! A New Blog will be coming in late 1Q2016. A link will be provided when available.
Shawn Brume Tags: lto lto7 ts4500 storage ts1150 data tape ts1100 archive ts3500
In our daily lives we are exposed to all sorts of technologies, most of which exist predominantly for our entertainment. Developments in video over the last 10 years have reached such a level of detail that, with 8K video, pores in a person's skin can be seen from hundreds of yards away. Today 8K is used most prominently in sports, allowing a quick zoom and then transmission at a lower quality, usually 2K.
But what does this all have to do with brain surgery and tape storage? Those who have been to any of the world's technology shows, like NAB, IBC, InterBEE and CES, have seen some powerful examples of video production: 360-degree video that provides virtual reality so real you feel like you are on the side of a mountain. Brain surgery is one of those ever-so-delicate procedures most of us would like our doctor to be very skilled in, but how does a doctor gain more experience faster, and from those with great skill?
For the old movie buffs, the days of the arena seating of "Dr. Kildare" are a thing of the past for delicate surgeries. 4K 3D video is enabling these incredible surgeries to be experienced in a manner that is nearly as good as the real thing. I recently had the opportunity to view one of these videos and was amazed not just at the clarity, but at the depth and texture of the experience: a real surgery removing a brain clot that can provide surgeons with more experience than they could get any other way. 3D imaging is also being used more and more in the surgeries themselves, improving the surgeon's ability to navigate delicate areas of the body.
Being a technology person, I naturally asked what the technical specifications were for the video. 4K 3D video requires 250 GB of storage per minute! As a comparison, an HD movie in m4v format is around 6 GB for the entire 90 minutes. So a 90-minute video of the critical part of a brain surgery will take 22.5 TB of storage. In an education environment, multiple copies are critical to ensuring quality of service: with 141 MD-granting institutions in the US alone, just a single video will consume over 3.1 PB of storage.
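A back-of-the-envelope version of that arithmetic, using only the 250 GB/minute rate, 90-minute runtime and 141-institution count quoted above:

```python
GB_PER_MINUTE = 250        # 4K 3D surgical video
RUNTIME_MIN = 90
INSTITUTIONS = 141         # US MD-granting institutions

per_video_tb = GB_PER_MINUTE * RUNTIME_MIN / 1_000
total_pb = per_video_tb * INSTITUTIONS / 1_000

print(f"{per_video_tb:.1f} TB per video")        # 22.5 TB
print(f"{total_pb:.2f} PB across all schools")   # ~3.17 PB, i.e. over 3.1 PB
```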
Each video will most likely be viewed once a year or less, depending on the subject and the number of videos, meaning that for the most part these are cold, near-line files. Students need access to the files, but at this size and subject specificity they do not need it instantly. This is where tape "is" brain surgery!
Producers of these videos can distribute the very large file or files easily and quickly via tape. Once at the school or display location, the LTFS media can be placed into a larger global namespace or other unstructured environment as near-line media. With IBM Spectrum Archive, the videos become available to searches and can be accessed through a normal access method (a double-click or read operation), yet that storage costs less than 0.2¢ per GB per month.
The savings from storing these huge files on tape frees more funding for the technologies required to stream the videos to viewers, and that is going to be flash. The IBM FlashSystem 900 is so fast that the videos can be transformed and streamed out at a rate that ensures the viewer always feels as if they are participating in the operation.
Low cost, ease of access, longevity, ease of use and transportability are all reasons tape is more applicable to modern storage than ever before. This is just the beginning of big storage for video; imagine the amount of tape storage required when these critical educational films go 8K 3D!
Shawn Brume
I am often asked this question.
On initial review of the LTO tape drive specifications, it appears that the Full High and Half High tape drives have the same operational specifications. Table 1 shows the published specifications for the IBM LTO7 Full High (FH) and Half High (HH).
Mean Time Between Failures (MTBF) and Mean Swaps Between Failures (MSBF) are values determined for any single homogeneous system. They are based on other operational specifications specific to the individual drives. Raw performance values such as capacity and throughput are best-possible values under ideal conditions.
So why would I want a Full High drive instead of a Half High drive? Besides the loader life indicated in the specification, all of the values are the same. But these values do not tell the full story: to fit into smaller packaging, the Half High drives require certain trade-offs.
Let's take a look at the motors that move the tape in and out of the cartridge. The Full High has a larger and more powerful motor than the Half High, which allows it to accelerate at 10 m/s² while the Half High can only accelerate at 5 m/s². This matters a great deal in environments where stops and starts are common. Stops and starts, referred to as back-hitching, occur when a host is busy and cannot send or receive data as fast as even the slowest speed of the tape drive, and also when the application is writing small files to tape with syncs between each file. When millions of files can be written to a single tape, using an LTO-6 Full High instead of a Half High can mean writing the files in as little as half the time, because a single back hitch takes 1.3 seconds longer to complete on a Half High drive than on a Full High drive.
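Only the 1.3-second per-back-hitch delta is quoted above, so the sketch below simply accumulates that delta for a hypothetical sync-heavy job; the 50,000-file count is an illustrative assumption, not a figure from the text.

```python
def extra_half_high_seconds(sync_writes, delta_s=1.3):
    """Added wall-clock time on a Half High drive if every synchronized write
    forces a back hitch that takes delta_s seconds longer than on a Full High."""
    return sync_writes * delta_s

# hypothetical job: 50,000 small files, each followed by a sync
extra = extra_half_high_seconds(50_000)
print(f"{extra / 3600:.1f} extra hours on the Half High drive")   # ~18 hours
```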
Back-hitching is further affected by the size of the data buffer. The LTO6 Half High data buffer is considerably smaller than the Full High's. The greater the delta in input/output between the host and the tape drive, the more the drive must compensate; since the Half High has a smaller data buffer, the probability that it will need to back hitch is higher. The HH drive simply has less buffer with which to absorb a host that cannot stream.
The motor capability also limits the maximum locate/search and rewind speeds of the Half High drive. The difference between the FH and HH maximum locate and rewind speeds is only 1 m/s, and for small and mid-range applications it will not affect overall performance. In the locate- and seek-heavy environments typical of enterprise applications, however, that 1 m/s can affect time to data in large solutions, with the amount of impact depending on the size of the solution. For example, if a mount requires locating to end of tape once, reading a small file and rewinding for unload, the delta between the Full High and Half High will be about 18 seconds to complete the operations.
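A sketch of where a figure like 18 seconds can come from. Only the 1 m/s delta is stated above; the ~850 m tape length and the 10 m/s versus 9 m/s maximum locate/rewind speeds are illustrative assumptions.

```python
def full_pass_delta_seconds(tape_m, fast_m_s, slow_m_s, passes=2):
    """Extra seconds on the slower drive for a full-length locate plus rewind."""
    return (tape_m / slow_m_s - tape_m / fast_m_s) * passes

# assumed: ~850 m of tape, 10 m/s (FH) vs 9 m/s (HH) maximum locate/rewind speed
print(f"{full_pass_delta_seconds(850, 10.0, 9.0):.0f} s")   # ~19 s, in line with the ~18 s above
```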
The loader mechanism in the Full High drive is designed for a higher number of cycles and to be more durable in high-speed automation environments. In enterprise tape automation, the best way to improve time to data is to move media between slot and drive very quickly, which requires a more robust loader mechanism to sustain loader life under the higher duty cycles. In an enterprise automation environment where time to data is critical and many media swaps are expected, the Full High provides the best durability.
So looking at Table 2, a fuller view of the underlying specifications for the drives, we can assert that if I have "enterprise"-class data I should be using a Full High tape drive. Furthermore, I will have more flexibility in my operations when choosing the LTO Full High tape drive.
Shawn Brume Tags: ts1140 tape data archive ltfs optical lto ts1150 blu-ray retain cd retention lto7 lto6
I recently read a paper by Tom Coughlin that sparked my curiosity. Tom has long been a strong supporter of tape as a medium of choice for long-term, lowest-cost storage. In his most recent report he teamed with Steve Hetzler to analyze access and touch rates and the strengths of certain storage media in relation to touch rates and time to data. The duo also touched on what could be considered the two leading "archive" media, tape and optical.
Let me preface this by saying that for the basic touch rate models established, I give this report 100% credit; it is a fairly considered analysis, and I would never expect any report to be all-encompassing. Where I take exception is in the tape-specific attack on the "failure rate" of tape hardware and media, which is not addressed for any of the other media.
I tend to be a simple person and try to look at the data in terms and calculations that we can all understand. I also like to reference data that has been proven on more than one occasion. Take, for instance, the studies from the University of Santa Clara, the Gartner® Group and IDC indicating that between 65% and 85% of data is never accessed again 90 days after creation. Any study of touch rates associated with unaccessed, near-line data should take into account the references indicating that only 35% of the data will ever be accessed again, and most likely much less by the time it is tiered to tape or optical.
So let's do a quick review of the tape and optical data used for the touch rates, keeping in mind that I am not disputing the touch rate possibilities. The analysis assumes the touch rate should be based on keeping the solution cost as close as possible to the cost of the media. In the tape case that is 1 tape drive per 400 tapes, or 1 petabyte of data; optical is assessed at 1 drive per 4 terabytes of data. If this is to be an enterprise archiving solution I have to be realistic; I have 12 TB of data in my house, and honestly, at 4 TB I would just buy a NAS RAID from Office Depot and swap it out every two years. Let's say the enterprise archive is 1 PB just to make it easy. I love the sound of the word petabyte; only yotta sounds better.
If we retain the touch rate of both solutions for 1 PB, we have 250 optical drives managing 20,000 pieces of media for infrequently accessed data, compared to 1 tape drive and 400 tapes. Assuming current optical and LTO6 technology densities and prices, we can make some very simple comparisons of the durability of an archive solution.
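The drive and media counts above fall out of simple division; the 2.5 TB LTO6 cartridge and 50 GB optical disc capacities are my assumptions, chosen to match the counts quoted in the text.

```python
PB_IN_GB = 1_000_000

# tape: 1 drive per 400 cartridges, assumed 2,500 GB (2.5 TB) per LTO6 cartridge
tape_cartridges = PB_IN_GB / 2_500        # 400 cartridges
tape_drives = tape_cartridges / 400       # 1 drive

# optical: 1 drive per 4 TB of data, assumed 50 GB per disc
optical_media = PB_IN_GB / 50             # 20,000 discs
optical_drives = PB_IN_GB / 4_000         # 250 drives

print(tape_drives, tape_cartridges)       # 1.0 400.0
print(optical_drives, optical_media)      # 250.0 20000.0
```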
Addressing the "failure rates" of tape drive solutions, we can quickly gather the same data for optical:
- Optical tray loader life is ~40,000
- Random seek life of the optical arm is ~4,000,000
- MTBF is around 70,000 hours at 10% duty cycle.
*90% duty cycle is assumed in the report
*Note that ALL hardware and magnetic media have failure rates associated with bit error, MTBF and/or MSBF. I make no reference to disk in this comparison because, without extensive algorithms managing power consumption, the TCO is not even comparable; see the Clipper Group study on the 9-year archive TCO for proof.
Where, in the data world, does this leave us in figuring out how these numbers impact a near-line “Archive” solution? It leaves us with the following simple calculations based on the touch rates from the report:
1. Assumes no hardware warranty coverage; with a hardware warranty, there is no cost to the customer.
2. Notice the cost of drive replacements is equivalent to an enterprise tape automation and drive solution.
3. Media cost is based on the average cost from a random sampling of 6 different web prices.
4. Based on an average price of $120 USD per drive.
Are the touch rates equivalent? No. Do they need to be for a repository of data that is, for the most part, untouched forever? My expectation is that recall of data in an "archive" environment is at most about 5% in any given year, but I have to use the assumptions put forward in the report. The next argument is that optical media will last 100 years while tape only lasts 30, and that optical drives have a history of reading media written 30 years earlier. Tape drive hardware quality has meant that many working samples of hardware are still available 30 years after their initial introduction, so this is a "tit-for-tat". Getting down to it, it makes fiscal sense to migrate data to newer technologies; for an analysis of this, see the 50-year TCO blog.
The challenge for every IT professional is managing expectations around data. Retention of all data has become the de facto standard; it is time to set the realistic expectation that, to control costs in an ever-expanding data environment, access times will be slower, and that ease of management and lowest possible cost should be prime considerations in "archive" solutions.
A Spectrum Archive installation with 6 TS1150 tape drives can manage 5.5 PB of uncompressed data. Using industry-standard 2:1 compression in a file system solution, this amounts to 11 PB of data in 10 ft² of floor space! "Touch" that with an optical solution and see what it really costs.
Shawn Brume Tags: flash retention archive storage blu-ray data disk optical tape ltfs
In the scope of natural resources, data, the "new" natural resource, is as fresh as they come. However, the relative newness of data as a true resource has not gone unnoticed by the predominant data storage medium. Introduced in 1952, tape sits alone in the storage arena as the one historic technology still carrying today's "new" analytic workloads.
A brief history of tape demonstrates that the critical data of modern computational business has been carried through time almost solely on tape. Early in data acquisition, tape was the only medium with the throughput and capacity to hold the gigabytes of data being generated. Through the eighties and early nineties, tape continued to be the most reliable infrastructure for the most critical financial and scientific data. Like it or not, all of this data has been the big data of its generation. Currently, cloud and off-premise managed infrastructure is gaining momentum for both consumers and enterprises.
Tape in the cloud is also gaining momentum. IDC has reported that more than 2% of all energy produced in the United States is used to power the "open compute" infrastructures behind cloud and social media. The cost of powering and cooling these systems has led to entirely new specialties in the IT field. Clipper Group reports that in a 9-year TCO study, tape is up to 26 times less expensive than a spinning-disk infrastructure; the power portion of the disk TCO alone costs more than the entire tape infrastructure. MAID, the practice of powering down disk drives not in operation, has been less than successful in real applications.
Being fair to disk, it does have advantages in concurrency and time to data. Bearing in mind that industry analysts in several studies have determined that between 60% and 80% of all data is untouched after the first 90 days from creation, it is easy to see how concurrency and millisecond access time can be managed even while retaining critical data.
Looking at an analysis by Robert Fontana and Gary Decad [see diagram 1], a 50-year TCO has to take into account that disk faces serious industry challenges in overcoming atomic-level limitations. Tape at current densities, meanwhile, has no atomic limitations for what could be extrapolated to be more than 50 years.
Another consideration in a 50-year TCO is the overall management of the data. Management includes planned migrations of the data, dependability of the data in the infrastructure, and any manual intervention needed over the period of ownership. The common misconception is that tape has to be continuously handled to make the TCO calculation better than disk; in the past this was true only because of the scale of the infrastructures. Modern tape systems can easily manage 120 petabytes of online data in a single library string (calculated without compression, in a TS3500 with TS1150 10 TB drives). Few infrastructures today require 120 PB of data near-line and available.
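The cartridge count implied by that figure, using the 10 TB TS1150 capacity named above:

```python
LIBRARY_PB = 120
CARTRIDGE_TB = 10    # TS1150 cartridge capacity, uncompressed

print(LIBRARY_PB * 1_000 / CARTRIDGE_TB)   # 12,000 cartridges in a single library string
```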
But what about the risks to data integrity? Every time I turn around I see some off-premise export application or appliance claiming that data is at much greater risk on tape than anywhere else. Poppycock!! There, I said it. With the exception of physical transportation of media, tape is better than disk for both bit error rate and dependability. The bit error rate of the TS1150 and LTO6 is on the order of 1 in 10^20 bits, several orders of magnitude better than any disk drive available today. So how does that compare to the stated dependability of cloud providers like Amazon Glacier, which claims a durability of 99.999999999% (eleven 9's)? A single copy of data on tape offers 12 9's of durability. If we assume at least two copies of the data are kept for a disk system to reach its stated 9's, then the same number of copies on tape will yield 24 9's of durability!
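The nines arithmetic above is just independent-failure multiplication: if one copy's probability of loss is 10^-12 (12 nines), data is lost only when both of two independent copies fail. A minimal sketch, assuming independence between copies:

```python
import math

def nines(loss_probability):
    """Durability expressed as a count of nines, e.g. 1e-12 -> 12 nines."""
    return -math.log10(loss_probability)

single_copy_loss = 1e-12                  # 12 nines claimed above for one tape copy
two_copy_loss = single_copy_loss ** 2     # both independent copies must fail

print(nines(single_copy_loss))   # 12.0
print(nines(two_copy_loss))      # 24.0
```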
Speaking of managing the data, it is important to acknowledge up front that electronic data must be planned for migration. All modern digital data, including the majority of movie productions, requires a plan for easily managing migrations from generation to generation and from one format to another. Let's be very clear: modern data is not paper copies or film canisters, and it is not practical to think the digital industries will ever go back to those media, due to sheer volume alone. I am not going to lay out migration plans here, but I will point out that migration is not only about preservation; it is also financial. EVERY storage medium must have a migration strategy; data will not survive without management, even on traditional media like film and paper.
The chart below shows both the capacity and the cost per GB of storage media over the last 30 years, a good case for why planned migration, even with capital expense as a consideration, is financially astute.
*Note that in some instances the introduction date is the commercialized availability date, not the first possible date of introduction.
It is also important to understand that in a 50-year TCO study and comparison, we assume existing technologies will continue, with new storage technologies entering at the extreme high end of performance optimization.
For simplicity, this blog will also assume zero growth in the data being stored. This ensures we are looking at the future cost of present data, not accounting for the total storage cost of future data. I will also assume that the data stored today must be retained for 50 years with no defensible deletion.
How important are data migration strategies? Very. Long-term, capacity-optimized storage planning is the one part of IT management that can only be solved through forward-looking expectations that extend beyond the reasonable life of current technologies.
To calculate the TCO we will lay out a couple of clear calculation assumptions:
Table 2 represents the averaged cost per GB for each future period.
Using the assumptions set forth and straight-line calculations, the total TCO for a 50-year retention is shown in Table 3.
Table 3: Cost of ownership by period for tape and disk.
Tape migration cycle is 10 years.
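Tables 2 and 3 carry the actual figures, so the sketch below only shows the shape of the straight-line calculation: a fixed repository, re-bought at each period's cost per GB, migrated every N years. All the numeric inputs here are hypothetical placeholders, not the table values.

```python
def retention_tco(data_gb, cost_per_gb_by_period, migration_years, horizon_years=50):
    """Straight-line TCO with zero data growth: at each migration the full
    repository is re-purchased at that period's cost per GB."""
    total = 0.0
    year = 0
    period = 0
    while year < horizon_years:
        cost = cost_per_gb_by_period[min(period, len(cost_per_gb_by_period) - 1)]
        total += data_gb * cost
        year += migration_years
        period += 1
    return total

# hypothetical per-GB costs for each successive period (placeholders only)
tape_costs = [0.02, 0.01, 0.005, 0.003, 0.002]            # 10-year migration cycle
disk_costs = [0.04, 0.03, 0.02, 0.015, 0.01,
              0.008, 0.006, 0.005, 0.004, 0.003]          # 5-year migration cycle

print(retention_tco(1_000_000, tape_costs, 10))   # 5 re-purchases over 50 years
print(retention_tco(1_000_000, disk_costs, 5))    # 10 re-purchases over 50 years
```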
Now take cost out of the picture: what is the pain in your organization of managing extremely long-term retention of data? Most IT professionals deal with issues as they arise. The problem with that way of thinking is that by the time migrating the data at rest crosses the mind of the ever-busy IT professional, it may be too late. If the data was on disk, are the disks still accessible and is the data still readable?
No matter how much it is pressed that disk can last 20 years, it is more likely that after no more than 5 years of disk inactivity, the data will not be accessible in whole. Tape, on the other hand, can last 30 years without losing data access.
Just as paper has lasted for hundreds and even thousands of years, data on tape will continue to be the way to get an extremely long life out of data. If racing sailboat sails can be made to last through the rigors of salt water, light and heat, then the same materials in digital tape should be trusted to last 10-30 years in properly acclimatized environments.
Decad, Fontana, Hetzler: The Impact of Areal Density and Millions of Square Inches (MSI) of Produced Memory on Petabyte Shipments for TAPE, NAND Flash, and HDD Storage Class
What a busy end to the year. Tape continues to be strong in the marketplace, with new use cases around cloud, analytics and big data showing the most growth. Interestingly, at the same time that the value proposition of tape for persistent data remains as strong as ever, some in the industry would have us believe that the "management" of tape is a monumental task. Folks, this is not your father's tape solution!
One article I read recently in the Enterprise Storage Forum from a guest author (http://bit.ly/1xZsXgN) makes a very high-level comparison to the ease of use of the Amazon cloud. The article goes into great detail to argue that personnel will be needed to continually move tape physically in and out of the solution in order to grow tape data storage. This is simply the most limited view of the usage of tape and harks back to the late 80's, when data storage could not keep up with constant demand within the existing physical footprint. Later in the article the author tries to argue that a tape infrastructure cannot exist off premises (off-prem) without that physical movement of tape; again, simply not true. With TS1150 drives in enterprise automation, an IT department can back up or retain up to 150 PB of uncompressed persistent data, all under a user-accessible file system.
Although the article alludes to Amazon Glacier being much less expensive than tape, it makes no real pricing comparison. Well, let's take a quick look at how this plays out.
I will preface this by saying that all tape pricing is based on available IBM pricing with an industry-standard discount, and that prices for Amazon Glacier are taken directly from the pricing page (http://aws.amazon.com/glacier/pricing/) as of 6/1/2014. As a storage "guy" I admit that backup and long-term storage of data always require some overhead, so all comparisons assume a level-1 systems admin spends one third of their time per month managing the tape environment ($2,500 monthly).
To make a comparison that is logical, without getting lost in complexity, we will make some assumptions:
• 2PB of data storage
• Single Data archive - this indicates that we are doing a single copy compare
• Note: Amazon recommends that a second copy of this data be retained in a separate geography in Glacier
• Bandwidth is not an issue with standard internet based transfers
• Scenario 1
• True archive with data access as an exception
• 5 Year retention
• Tape must include Servers, Maintenance and Management
• Scenario 2-3
• Archive with retrieval at 10% and 20% per month
• 5 Year retention
• Tape must include Servers, Maintenance and Management
• Scenario 4 – Use as Disaster recovery of 40% of the data
Here is how the numbers play out:
• Scenario 1 – True Archive 5 year cost
• Glacier: $1.2M (1¢ per GB/Month)
• Tape: $357K (0.2¢ per GB/Month)
• Scenario 2 – 10% retrieval per month
• Glacier: $2.06M (1.7¢ per GB/Month)
• Tape: $357K (0.2¢ per GB/Month)
• Scenario 3 – 20% retrieval per month
• Glacier: $2.66M (2.2¢ per GB/Month)
• Tape: $357K (0.2¢ per GB/Month)
• Scenario 4 – Use as Disaster recovery of 40% of the data
• Glacier: $1.2M (1¢ per GB/Month) plus $35K one time, 229 hours to Full recovery*
• Tape: $357K (0.2¢ per GB/Month), 128 hours to Full recovery
*Data transfer is based on a standard internet connection and assumes no special performance-based connectivity has been arranged with Amazon.
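A minimal sketch of how the per-GB monthly rates above roll up to five-year totals for 2 PB; the retrieval-scenario totals are simply converted back into the effective rates quoted in parentheses.

```python
CAPACITY_GB = 2_000_000     # 2 PB archive
MONTHS = 60                 # 5-year retention

def five_year_total(cents_per_gb_month):
    """Roll a flat per-GB monthly rate up to a 5-year total in dollars."""
    return CAPACITY_GB * MONTHS * cents_per_gb_month / 100

def effective_rate(total_usd):
    """Convert a 5-year total back into cents per GB per month."""
    return total_usd / (CAPACITY_GB * MONTHS) * 100

print(f"${five_year_total(1.0) / 1e6:.2f}M")               # Scenario 1, Glacier storage only: $1.20M
print(f"{effective_rate(2_060_000):.1f} cents/GB/month")   # Scenario 2 total -> ~1.7
print(f"{effective_rate(2_660_000):.1f} cents/GB/month")   # Scenario 3 total -> ~2.2
```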
So I fully agree: push the data to the cloud with Amazon Glacier and it can live forever at 1¢ per GB per month, just please do not ever pull it back, and if you do, expect the costs to rise significantly. Also be aware that first access to data is specified as 3-5 hours without a guarantee, so the RTO for any data must be longer than 5 hours. If you are constrained by CAPEX and need to shift costs in your infrastructure, cloud services like SoftLayer are going to become a requirement.
Do not get me wrong: I support disk, cloud storage and backup-and-restore (BaR) models, and I am a firm believer in the power of tiering to create the most efficient and cost-effective solutions.
Don't be fooled by complex arguments around the physical movement of tape media. Today's models use "truckless" data replication, and tape in the namespace is still the least expensive option.
*Note: At the time of this blog I checked Amazon, and they have recently changed the charge metrics on data retrievals; at first glance the long-term storage comparisons are only minimally affected.
#IBMdpr #IBMStorage #tape
*Disclaimer: Costs indicated in this blog are not a guarantee of price and are for examples only. Real costs and prices for Hardware and service will vary.
Shawn Brume Tags: ts3500 ts7720 vtl cloud ts7700 grid tape hydra automation ts7740 z/os critical data
In 2006 IBM introduced the fondly named "Hydra" Virtualization Engine (VE). The official name of the product is the IBM TS7700 Virtualization Engine. The TS7700 line of products supports the entire spectrum of System z tape operations while maintaining a true enterprise level of business continuance through its industry-leading grid support.
The TS7700 started as a replacement for the extremely successful Bxx line of VTS. Like the original VTS, the TS7740 supports disk-based tape emulation with hierarchical storage management to physical tape, using the IBM TS1100 tape drive family, which started with the 3592 J1A and today supports the TS1140. In 2008 IBM introduced a disk-only version of the TS7700, marketed as the IBM TS7720 Virtualization Engine. The TS7720 offers less overall maximum capacity, but provides 100% disk cache residency for today's most time-critical applications.
Two to six TS7700s can be configured in a grid, with systems connected over TCP/IP across the globe. All content stored within one or more TS7700 nodes is accessible from any other node via its industry-unique remote access capabilities; the TS7700 is the original "cloud" storage solution. The TS7740 and TS7720 can be mixed within a "hybrid" grid, offering the benefits of high disk cache residency along with the cost and security benefits of physical tape.
Illustration 1. Grid Cloud Concept
TS7700 Grid provides more than just a means to replicate data. There is no concept of primary, secondary or standby nodes: each node or cluster's devices within the grid always have access to all volumes contained in any cluster in the grid. System z hosts view the entire grid as one large composite library with up to 1,536 common devices. Volume data is accessible from any cluster's devices independent of where copies exist. In the event of a device or site outage, no user intervention or host knowledge of where the data exists is required; if a local copy isn't present, TCP/IP is used as a real-time channel extender to clusters containing a valid copy.
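The 1,536-device figure is just the maximum grid size multiplied by the virtual device count each cluster exposes; the 256-devices-per-cluster value is my assumption, since only the total appears in the text.

```python
MAX_CLUSTERS = 6            # two to six TS7700s per grid, per the text
DEVICES_PER_CLUSTER = 256   # assumed virtual devices exposed by each cluster

print(MAX_CLUSTERS * DEVICES_PER_CLUSTER)   # 1536 common devices in the composite library
```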
Hundreds of customers have benefited from the TS7700 due to its high degree of flexibility, high availability, reliability, ease of use and, of course, industry-leading performance. Fiducia IT AG is a premier TS7700 customer running several "hybrid" grids. These grids operate in a 24x7x365 environment supporting Fiducia's critical System z tape operations. After several years of operation and expansion of the system, Fiducia's assessment of the TS7700 is:
· IBM TS7700 gives Fiducia a single layer of management of the high performance virtual tape systems attached to IBM z/OS.
· Grid technology in the TS7700 has enabled a business continuity that is unrivaled by any other vendor.
· TS7700 allows Fiducia to flexibly meet performance and capacity requirements for fast access reads and long term tape storage, in a mixed [hybrid] grid environment.
Like Fiducia, many customers place their most critical data in the TS7700 grid cloud, and were using the grid long before the cloud was relevant. The TS7700 has continued to add features and functions to meet demanding customer requirements. With over 4,000 units in the market, the TS7700 is and will remain a premier product for System z's most critical tape operations.
For more information on TS7700 go to:
Shawn Brume Tags: gartner vtl tape ibm deduplication protectier emc quadrant pbba idc
It has been a while since I blogged, and I felt the need to get back on track and update the community with my view of ProtecTIER and the deduplication market in general.
A recent Gartner Magic Quadrant study ranked the deduplication PBBAs currently on the market. With the exception of four products, the remainder of the market was placed in the "niche" category, and that includes ProtecTIER©. So the question is: what does the Magic Quadrant ranking really mean?
In general, a Magic Quadrant (MQ) serves as a quick gauge of where a company's product stands relative to the market. This is complicated, as it does not rank which product is best for a customer, only how the companies and products stack up against criteria that include intangible effects, like how the market "feels" about a product.
This MQ was conducted at a time when IBM's competitors were on the attack, misquoting press releases about IBM resource shifts as signs that ProtecTIER© was being killed off. This misquoting and the aggressive tactics by IBM's competitors mean only one thing: ProtecTIER© has been and will continue to be a threat to our competition, including EMC's Data Domain. No concerns were listed in the MQ suggesting IBM has any issues with ProtecTIER© that should worry a customer, outside of the responses to the attacks by IBM's competitors in the space.
In the IDC Driving Change report, IBM ranks third among all PBBAs. This includes the IBM TS7700 enterprise PBBA, though by unit count it is a relatively small part of this market. The ranking is also based on the number of units, not on the capacity (raw or deduplicated) being managed by the solution. There are no solid numbers in the ranking related to data stored; however, ProtecTIER©'s high availability, dual-node cluster capability and ability to manage up to a petabyte (PB) of data behind a single installation have positioned it as an enterprise leader that can grow as customer requirements grow.
ProtecTIER© is definitely the best of breed among niche deduplication appliances. Offering the patented HyperFactor™ technology, high-availability dual-node clustering, and flexible storage with the option to grow without rip-and-replace, the TS7650G continues to be the choice for enterprises around the world, while the TS7620 offers the best possible entry price. The points Gartner made in the MQ at ProtecTIER's© introduction still hold true: it remains the most flexible and most easily upgradable solution on the market.
What are some considerations not accounted for in the MQ?
Gateway Architecture: There is absolutely no consideration given to IBM's Gateway approach to deduplication. We are the only vendor on the planet that provides the flexibility of the Gateway design. This has a number of advantages, many financial.
Disk Flexibility: Every other vendor locks the customer into a specific disk configuration approved for their appliance. IBM offers many options, both IBM and non-IBM. Storage options include the IBM FlashSystem 840, with the highest level of storage performance in the industry. This flexibility ensures both the highest possible performance and the lowest entry price point for a solution.
Future Upgrades: Since most customers are growing in the 30% to 40% CAGR range, they can expect the repository configuration to double over the course of the next two to three years. When customers upgrade an appliance after three years, they are forced to buy three-year-old disk (if not older). With IBM, the customer can purchase the best offering available at the time; IBM does not know what disk options we will have in 2017, but ProtecTIER will support them. Today's appliances will be stuck on 2013 or 2014 disk for the entire time horizon of the installation.
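The doubling claim follows directly from compound growth; a quick check:

```python
import math

def doubling_years(cagr):
    """Years for a repository to double at a given compound annual growth rate."""
    return math.log(2) / math.log(1 + cagr)

print(f"{doubling_years(0.30):.1f} years at 30% CAGR")   # ~2.6 years
print(f"{doubling_years(0.40):.1f} years at 40% CAGR")   # ~2.1 years
```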
Box Swaps: Due to the limitations of their disk offerings, many customers choose to simply replace their appliance with a new one at the end of three years. With IBM, the Gateway design provides a longer-term approach, much like a tape library, which grows roots into the floor and remains installed for as long as a decade. The appliance model forces customers into a short-term, three-year box-replacement approach, which is what one might expect of a disk vendor. If you like the appliance model, that's good, because you will buy three of them in the next ten years. With IBM, one can upgrade over time, even replacing the Gateway nodes, and end up with a package that is as new after ten years as it was on day one, without replacing the entire system or copying data from an old box to a new one.
Improved TCO: Only IBM provides the TCO approach enabled by the Gateway design. Any customer who runs an analysis of the financial model of a Gateway incremental-upgrade approach against the complete box swaps bound to occur over a five-year horizon will quickly discover that IBM provides the best value. Are you going to dump deduplication at the end of three years and change to a completely new backup architecture? Doubtful. It is important to look at the total time horizon of backup demands rather than running a TCO over the three-year appliance life span. Isn't it better to continue with the best possible performance and ease of management without data migration, rather than ripping and replacing?
Aggregate License Agreement: Only IBM can provide a single deduplication umbrella for the entire enterprise. Since most customers virtualize for the exclusive purpose of replicating data, most installations involve at least two systems, a backup system and a replication target. We can aggregate the software licensing so that these are viewed as one large system, significantly lowering pricing. Moreover, the Aggregate License model assures the lowest pricing for future capacity upgrades. No other vendor can provide this capability. With EMC, requiring two boxes doubles the cost; this is not true with the Aggregate License approach from IBM. Note that Gartner has criticized our software model, but it is actually the very thing that allows us to keep our pricing low.
Dual-Node Redundancy: IBM's unique dual-node architecture provides complete hardware redundancy. If a system with single points of failure were to become unavailable, all backups would cease and all access to backup data would be lost. This is a serious consideration for enterprise-level and high-availability requirements, and a niche IBM fills better than any other product on the market.
Backup Software Aggregation: For any customer who may be running TSM, IBM has built a special deduplication filter which maximizes deduplication ratios for TSM backups. IBM has also developed a unique pricing alternative for TSM customers, lowering overall costs of the backup solution.
IBM continues development on ProtecTIER©. Features continue to be reviewed and implemented based on total customer demand. IBM's continued support of ProtecTIER© has been, is and will be focused on enterprise customer needs, driven not by the size of the account but by the importance of retaining the backup infrastructure for long periods of time and the ease of managing that data and infrastructure. Data integrity, availability, capacity, flexibility and upgradability are all key attributes that demanding customers focus on, and so does ProtecTIER.
Knowing IBM has the right focus on enterprise-class data deduplication, why would you choose anyone else?
Shawn Brume Tags: tape ltfs ts4500 ts7700 dp&r protectier
For those of you who could not make the IBM Edge event, it was a great one, both in general and from the perspective of DP&R. There were over 22 tracks associated with DP&R products during Edge, more than any other year. Tape was also a highlight of the main sessions and many of the breakout sessions.
A special focus was the announcement of the TS4500 high-density enterprise tape library and of the LTFS EE Bundle offering, which makes ordering and implementing LTFS EE easier for customers. With the announcement of Elastic Storage and Software Defined Storage driving new use cases for storage, tape and DP&R products are integral to the success of this strategy. Tape will continue to be the lowest-cost tier of storage, providing near-line storage of persistent data at up to 26x less than the TCO of disk-only solutions. Combine this with flash and the future is now: near-instant transactional data that tiers down to the lowest-cost tier once it becomes persistent.
This excitement was not limited to DP&R presentations; all of storage acknowledged the need for data protection and for tape as a tier in the overall storage picture.
I have already started to see many of the 2014 Edge sessions posted on YouTube. As they become available, I encourage you to watch them. IBM continues to position storage and the DP&R portfolio for long-term success.
You can catch the opening session here Edge2014 GeneralSession - Day 1
In recent months, rumors that IBM is not supporting ProtecTIER have been circulated by IBM's competitors. IBM does not respond to rumors, but Laura Guio, Vice President and Business Line Executive, Storage, recently made a clear statement on IBM's position on the future of ProtecTIER.
IBM and the Data Protection and Retention group are clearly in the data deduplication business.
So often we all have questions about the products in our IT systems that could be resolved with a quick reference. These inquiries can be as small as what a light on the system means, or how to do something specific in the GUI.
The latest trend is to put some of this on Twitter.com and let you drive the support you need for these quick issues. IBM has been active in this space for a long time. I encourage you to take a look at some of the support Twitter pages, and even my Twitter for DP&R, @IBMdpr. Leveraging these quick references will make your daily operations easier.
@IBMStorageSupt - IBM Storage Support https://twitter.com/IBMStorageSupt
@ibm_eSupport - IBM Support https://twitter.com/ibm_eSupport
You can always tweet @IBMdpr and we will find the right person to respond.
Data Protection and Retention continues to be committed to new development in the ProtecTIER family of products. I came across this release and thought it would be great to share it with my readers.
IBM, the leader in data protection, is among the first to combine flash technology with deduplication to raise the performance bar for today's extreme backup workloads. Flash memory, with its high performance and micro latency, is an effective tool not only for improving deduplication performance but also for lowering the price-performance curve. Using flash efficiently for just the metadata provides cost-effective leverage to boost performance for systems of all sizes, delivering higher throughput with a greater degree of consistency. IBM is the first to combine these two technologies with its unique Gateway design to not only enhance performance but also enable new levels of cost efficiency.
Benchmark data demonstrates that the use of flash can improve price-performance by 1.5x compared to using disk for metadata. Furthermore, price-performance can be improved by more than 2.5x for single-stream performance, which is important for IBM i and other smaller systems replacing small numbers of tape drives. These new levels of price-performance are brought about by IBM's commitment to ProtecTIER, its non-hashing deduplication algorithm, and IBM's unique Gateway design. Moreover, flash can be incorporated as an upgrade to existing systems, providing unparalleled investment protection. Unlike the appliance-based architectures used by other vendors, the Gateway design allows new technology to be adopted seamlessly without creating model obsolescence.
The integration of flash is just one recent example of IBM's unwavering commitment to ProtecTIER. Deduplication remains a major innovation for reducing storage costs, and IBM is leading the way in leveraging new technology to more aggressively lower storage costs for the enterprise.
The search for the "lost" Malaysia Airlines 777 is now being tackled by big data and old-fashioned analytics. One of the largest geospatial imaging companies in the world, DigitalGlobe Inc., has sent out an open invitation for groups of everyday people to help review millions of images (http://www.digitalglobeblog.com/).
The images cover millions of square miles of ocean, taken after the aircraft's disappearance. DigitalGlobe is maintaining these images on spinning storage to allow multiple reviewers to examine the data at the same time. This is causing some bandwidth issues at the website, but DigitalGlobe is working to resolve them.
Take the time to read the DigitalGlobe blog and the NPR article below. Get involved; you could be the one who finds the clues!
Normally I stay pretty formal in the DP&R blogs, but today I want to write about the sunshine DP&R seems to bring everywhere it goes.
My travels over the last year have taken me around the world. Being based in Tucson, Arizona, USA, my years are filled with sunshine, 352 days of it per year as a matter of fact. For some that is great, but I love diverse weather conditions, so I look forward to travelling to those locations where it is all "gloom".
So it is never a surprise when I get to locations like San Francisco in the rainy season, or London, or in the latest case Denmark in the winter, that I, as a DP&R representative, bring the sun!
When I arrived in Denmark they had had very little sunshine, and as I arrived to the enjoyable rain they remarked that they expected more from me. As it would happen, on day 2 I woke to brisk breezes and rain. My first briefing began at 0930 with several jokes that not even DP&R could bring sunshine to Denmark at this time of year.
At 1000 the sunshine broke through the clouds, which continued to part into blue skies and a warming room. Needless to say, the sunshine continued until nearly 1400. Once again DP&R brought sunshine where there was gloom, even if only for a short time.
So if it is gloomy where you are and the sun suddenly appears, don't worry: DP&R is in the area!
It's sunny in Chicago today.
Check out the latest on helping small customers with LTFS that I posted below.