The old adage of faster, smaller, cheaper has been revived in the N series product line. This week IBM officially released the information around the highly anticipated OEM rebrand of NetApp's FAS2040: the N3400. This system has a small 2U form factor but delivers higher performance than its beefier brother, the N3600. If you want to see a full comparison of the three boxes, click here for more information.
IBM now has three systems that round out the entry-level, or departmental, storage platform: the N3600, the N3300 and now the N3400. All three are based on internal drives with some expansion to a few shelves as needed. The N3600 comes with 20 internal drives, while the smaller N3300 and N3400 come with only 12 internal disks and can expand to a maximum capacity of 136TB. Two controllers allow administrators to have a high-availability solution at low cost. This makes the system more attractive, as it also supports FCP, iSCSI, CIFS and NFS all from one platform.
The N3400 does have a few things I want to point out:
8GB of RAM (2x the amount in the N3600 and 4x the amount in the N3300)
512MB of NVRAM
2 integrated SAS ports and 8 total 1 Gbps Ethernet ports
PCI-e slot for expansion
All of these help set this box up for an important role within your datacenter. If you compare this system with other storage systems on the market, you find the new N3400 is well stacked and can compete even with larger mid-tier systems. This box is ideal for our SMB clients who really need an all-in-one system with the horsepower to keep up with a growing company. The system is a long way from the first entry-level system IBM rolled out, the N3700. If the two were compared, the N3700 would be a 'Happy Meal' and the N3400 would be a super-sized 2lb Angus burger with fries and a shake, maybe even an apple pie.
This new system is considered ideal for both Windows consolidation and virtual environments alike. With the additional ports, the system gains a longer life span as the new EXN 3000 SAS shelves become the standard for the N series product line. The system does not, however, support 10 Gbps cards or FCoE as the N3600 does. But as all N series systems run the same Data Ontap code, this system uses the same commands and interface and is built on the same technology as the N60x0 and N7x00 lines.
Overall, this is an enhanced refresh of the existing N3300 with more ability to scale with current technologies. Its performance will exceed the N3600's, which raises the question of whether the N3300/N3600 systems are still needed. I suspect that as Data Ontap 8 becomes generally available from Netapp, more entry-level storage devices will be released.
For more information on the N3400 and all things N series, follow this link or contact your local IBM Storage rep.
IBM released a new Data Ontap version last Friday along with some other minor releases, but more about those later. Data Ontap 8 7-Mode was the first release of a new 64-bit architecture that will allow N series customers to take advantage of larger aggregates.

A little history: about 8 years ago, Netapp purchased a company named Spinnaker for their 64-bit code, global namespace and some other odds and ends. For the most part, Netapp re-branded this code as their GX platform, allowing customers who wanted the feature set to purchase it apart from their Data Ontap base. GX was not a heavy seller, as it was complicated to install and much more pricey than the other brand, so Netapp decided to merge the two code streams into one. At first glance this sounds like a good idea. The Data Ontap code definitely had some limitations (small aggregate sizes, limited growth and no global namespace), but merging the two streams was harder than Netapp imagined. This was shown by Netapp promising a release of the merged code for years before a release was finally available for testing. There were many bugs (as there can be in RC code), but Netapp worked through the majority of them to produce a stepping-stone release of the merged code called 7-Mode.

The developers used bits and pieces of the GX code to get the 64-bit architecture, allowing customers to build larger aggregates, up to 100TB in size. This was really important, as the 2TB SATA drives were coming and the old 16TB aggregate limit would have killed any performance on the system. With only eight 2TB drives in an aggregate, throughput would be limited to about 400 IOPS per 16TB of drive space, not a good ratio at all. Therefore, a larger aggregate size allows up to fifty 2TB drives, achieving a more respectable 2,500 IOPS per aggregate. Now that we have 7-Mode available, there are some upsides and some downsides.
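The spindle math above works out to roughly 50 IOPS per 2TB SATA drive (400 IOPS across eight drives). A quick sketch of the arithmetic, with that per-drive figure as the only assumption:

```python
# Back-of-the-envelope spindle math from the post, assuming the
# implied figure of roughly 50 IOPS per 2TB SATA drive.
DRIVE_TB = 2
IOPS_PER_DRIVE = 50

def aggregate_iops(aggr_limit_tb):
    """Max data spindles that fit under an aggregate size limit,
    and the rough random-IOPS ceiling those spindles provide."""
    drives = aggr_limit_tb // DRIVE_TB
    return drives, drives * IOPS_PER_DRIVE

# 32-bit aggregate: capped at 16TB
print(aggregate_iops(16))    # -> (8, 400)
# 64-bit aggregate: up to 100TB
print(aggregate_iops(100))   # -> (50, 2500)
```

The ratio is the whole story: ten times the spindles under one aggregate means ten times the IOPS ceiling for the volumes living on it.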
First, as stated above, aggregate sizes have increased tremendously. Allowing more data disks in the aggregate increases the number of IOPS the filer can pool. On the downside, you cannot simply flip a switch and convert an aggregate created in the old 32-bit code into a new 64-bit aggregate. Customers will have to create a new aggregate after upgrading to the 7-Mode version of Data Ontap 8 and then migrate with some restore method (think DR restore from backup) onto the new space. You cannot mirror the two, as SnapMirror can only mirror between like-for-like aggregates (32-bit to 32-bit and 64-bit to 64-bit). No big deal if you are a new customer or if the filer is a new addition to the filer farm, but for existing customers I believe this will be a lot tougher. If you do not have the drive space to create a new aggregate of 100TB or less, you will have to either wait to buy more disks or do a manual backup (not a snapshot), destroy the existing aggregate, build a new aggregate on the 64-bit code, then restore. This, plus the fact that this is the first release of the new code family, is why customers will not adopt the new code very quickly. There are also some other gotchas, like no support for Performance Accelerator Cards (PAM II), no real interoperability between the two code bases and more. When I was an administrator, I hated having to read the release notes for the 'fine print gotchas', but in this case I encourage everyone to read the notes thoroughly and perhaps engage your local IBM Storage engineer to help you assess whether you are a good candidate to upgrade or not. The fact that this is a stepping stone to the full code line does help customers who need to move to the 64-bit architecture today without slowing down Netapp's development team. They are working on the next release of Data Ontap 8, called Cluster-Mode.
This will be the code that allows customers to cluster more than one pair of systems under one global namespace. I suspect this will be a great addition to the Data Ontap code line and will give Netapp more traction in the larger enterprise business. There were also some firmware releases for the EXN3000 shelf on Friday. For more information on what was released, visit the support pages at www.ibm.com.
Today IBM is releasing two new N6200 systems that are a huge improvement over the existing N6000 systems. The two new systems show a bump in capacity and performance and greater flexibility. In a very crowded midrange market, this new N series product set will bridge the gap between entry-level and enterprise-class systems.
One of the biggest issues with the previous 6000 systems was the limited number of PCI-e slots. The other was the lack of now-common onboard hardware like SAS and 10 Gbps Ethernet.
The first thing that stands out to me is the footprint of the new system. The older N6000 requires 6U for an HA pair. The new N6200 is half the size, occupying only 3U for an HA pair, or a single node with an I/O expansion module that provides an additional four PCI-e slots. Another configuration is two controllers with two expansion modules in a total space of 6U (equal to the older N6000 systems) but with a total of 12 PCI-e slots (vs. 8 on the older N6000).
We recommend using the two slots built into the controller for high-performance 10GbE and/or 8 Gbps FC adapters. The additional slots in the expansion module can be used for Flash Cache and other disk connectivity.
The onboard hardware is getting a face lift as well. While the new system sports a 10GbE port, it is used mainly for the interconnect and nothing else. This is one of the disappointments I have with this system, but I understand this is how Netapp will accomplish scale-out clustering.
FC ports were kept at 4 Gbps, but there are two new SAS ports with matching ACP (alternate control path) ports used for offloading some of the traffic from the data path.
One of the unsung updates is the NVRAM. Instead of using the same memory design as in the past, we now see an improvement through something called Asynchronous DRAM Refresh (ADR). This is a self-refresh mode in the Intel chipset that allows a portion of main memory to be backed by an on-board battery. This gives the NVRAM the same high bandwidth as main memory and also simplifies the design of the motherboard.
This, along with the introduction of new Intel processors, gives the N6200 systems a bump in performance. The SPECsfs benchmark on the highest N6200 system showed 101,183 ops at 1.66ms ORT, compared to the N6060's 60,507 ops at 1.58ms ORT, an improvement of roughly 67% in SFS throughput.
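Using the published numbers above, the throughput gain works out as follows (a simple calculation, nothing more):

```python
# Throughput comparison from the SPECsfs results quoted above
# (ops/sec and overall response time in ms).
n6200 = {"ops": 101183, "ort_ms": 1.66}
n6060 = {"ops": 60507, "ort_ms": 1.58}

improvement = (n6200["ops"] - n6060["ops"]) / n6060["ops"] * 100
print(f"SFS throughput improvement: {improvement:.0f}%")  # -> 67%
```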
IBM is introducing the IBM System Storage N6210 Series and the IBM System Storage N6240 Series. These new systems replace the IBM N3600 and N6040 Series respectively. GA is scheduled for January 28, 2011 (N6240) and February 25, 2011 (N6210). Here is the slide deck published with the release.
In answer to your requests for IBM N series demos, Andrew Grimes will be delivering a demo on Thursday, March 11th. This Introduction to IBM N series will be followed by a brief and informative demonstration of how N series delivers storage efficiency with disaster recovery solutions. This is your opportunity to demonstrate N series features and ease-of-use to your customers and prospects, plus get some assistance in closing business this quarter. All attendees who fill out the post-event survey will be entered into a drawing for a free Apple iPod.
The topics that will be discussed during this N series presentation are:
1. Simplifying Data Management
2. Storage Efficiency
3. Protecting mission critical business applications (Oracle, Exchange, SQL, VMware & SAP) better than our competitors
4. Most importantly, see how we recover these applications in a matter of minutes!
This weekend I was working on moving some of my winter and spring/summer clothes in and out of my closet and into containers. Last fall I purchased a few sealed plastic containers so I could put away my short-sleeve golf shirts and some of my shorts. Here in North Carolina we can get a mild day, and it is nice to have a short-sleeve shirt to wear. On those days I would go back to the containers and dig through the nicely folded items until I found the shirt I wanted. Sometimes I had to go through multiple containers because I had forgotten which one I had put it in a few months ago. This weekend when I pulled out the containers they were a mess; nothing was folded, and it took me more time to figure out what was what as it was all mixed up. I then wondered: what if I bought a bigger container and, instead of using multiple ones, used one large container to store all of my winter clothes? What would the issues be? Would I have enough space to store the container? Would there be some way of indexing the clothing inside to quickly find what I was looking for? Was there a way to put clothes I might need on a cool day in a separate container, just in case? There seemed to be more issues with using one larger container than I thought. It would be easy to dump all the clothes into the larger bin and claim victory, but that would not help me down the road. I needed a system, something to help me consolidate efficiently while still giving me access to the things I used on occasion. I also had to keep in mind the space I was going to use in my storage area. I didn't want to buy one large container and not be able to fit it in the space I had already allocated. I needed a flexible system, maybe a few boxes with labels that I could get to quickly if I needed something inside. Now take a look at some of the noise storage vendors are making about data storage consolidation.
Most of them are telling you they can come in, take your smaller boxes and dump the data into one big box. While that saves space and might keep you from administering multiple storage devices, you need to look at the downside of having just one big pool of disks. A large storage system that replaces multiple smaller systems will need more cache and processor power to handle the same load as before. If you want to move data around to different tiers of disk or tape, can you achieve that with the new system? I started down the road of buying the biggest container I could find but decided against it, as it would be too much trouble to find things. Your data storage systems need to be flexible enough to have multiple storage pools, so that data can move off faster disk to make room for data that is accessed more frequently. This not only gives your clients better response times on files they frequently use, it also tells you how much 'real' data people are using in your data center. The other issue I had was that I needed some type of labeling system, an index to tell me where the shirts were, where my ski jacket was, and so on. Your data is much the same: you need to keep up with where data lives in the storage system. As our storage systems get larger, we need to capture the file metadata easily and keep it in a table so we can run queries against it. Then there is the part of moving my clothes around that I hated the most: the purge. I went through and found the shirts that had been worn too many times or no longer fit the same as when I bought them. I packed these in a cheap cardboard box and took them to a donation center. This is the same as getting rid of old data in your system. Old data that is not being accessed is costing you money. You not only pay the environmental cost of keeping those bits spinning, it is also taking up room where new data could reside, costing you money to expand.
True archiving and purging of data will be needed for any system, large or small. Make sure you find a system that is easy to work with and automates this process based on policy. In the end, if you are looking at consolidating your data storage, there are multiple things you will need to find out about a system. A bigger container that can replace multiple smaller containers may not give you the flexibility needed to meet your changing needs. For more information on a better way to consolidate your storage platform and move your data, check out the information on SONAS and TSM.
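To make the indexing idea concrete, here is a toy sketch of keeping file metadata in a queryable table so cold data can be found by policy. The file names, sizes and ages are made up purely for illustration:

```python
# Toy metadata catalog: a small, queryable table of files so
# "cold" data can be identified by policy rather than by digging.
# All entries here are invented examples.
from datetime import datetime, timedelta

catalog = [
    {"path": "/proj/q3_report.doc", "size_gb": 1,
     "last_access": datetime.now() - timedelta(days=3)},
    {"path": "/proj/2008_archive.zip", "size_gb": 120,
     "last_access": datetime.now() - timedelta(days=400)},
]

def cold_files(catalog, older_than_days=180):
    """Return paths not accessed within the given window --
    candidates to migrate to a cheaper tier."""
    cutoff = datetime.now() - timedelta(days=older_than_days)
    return [f["path"] for f in catalog if f["last_access"] < cutoff]

print(cold_files(catalog))  # -> ['/proj/2008_archive.zip']
```

Real systems keep this index in a database and run the query as a scheduled policy, but the principle is the same: the metadata table is the labeling system for the boxes.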
Every year IBM puts on a conference for all of our clients, business partners and strategic partners. This conference has both Storage and X series sessions, along with keynote speakers from top management at IBM. People come from all over the world to this conference looking for the 'how to' answers and what's to come with the product lines. There is also a solution center that houses all of the products along with our sponsors. This year our top platinum sponsors are Cisco, Intel and Netapp. Other sponsors include Brocade, Emulex, Fusion-io, VMware, Red Hat and SUSE. I plan to be working in the solution center at the SONAS booth, talking to clients about the benefits of SONAS and how it fits into their environments. If you want to stop by, here are the hours I will be there:
Monday, July 18th
Solution Center Open 5:30 PM – 7:30 PM (w/Networking Drinks)

Tuesday, July 19th
Sponsor / Exhibitor Only Lunch 11:15 AM – 11:45 AM
Solution Center Open 11:45 AM – 1:00 PM (Coffee & Dessert served in the Solution Center)
Solution Center Open 5:30 PM – 7:30 PM (w/Networking Drinks)

Wednesday, July 20th
Sponsor / Exhibitor Only Lunch 11:15 AM – 11:45 AM
Solution Center Open 11:45 AM – 1:00 PM (Coffee & Dessert served in the Solution Center)
I will also be presenting a few sessions on NAS technology at the conference. Most of my sessions will look at what IBM is doing with SONAS, N series and Real-time Compression. I have a NAS 101 class that I really love doing because there are so many people who have a misconception of what NAS is today. In my N series update session we will be talking about the latest release of the N6270 and the EXP 3500, as well as a peek at the R23 release coming in a few weeks. The other two sessions I am doing are a little off the topic of NAS, covering social media and using www.ibm.com for help. Tony Pearson, John Sing and Ian Wright will be joining me on a panel to discuss the roles we play in social media and what each of us thinks of its future. The support session is something a client suggested to me out of their frustration over how to find documents on our support pages. Here is a list of sessions and times that I will be presenting:
7/18 - 1:00 sSN14 Storage Networking (NAS - SAN) NAS 101: An Introduction to Network Attached Storage
7/19 - 10:30 sSN15 Storage Networking (NAS - SAN) NAS @ IBM
7/19 - 1:00 sSN18 Storage Networking (NAS - SAN) IBM N series: What's New?
7/20 - 1:00 sGE10 General Tips and Tricks on Searching for Support Answers on ibm.com
7/20 - 5:30 sGE61 General Using Social Media in System Storage
7/21 - 10:30 sSN18 Storage Networking (NAS - SAN) IBM N series: What's New?
7/21 - 2:30 sSN15 Storage Networking (NAS - SAN) NAS @ IBM
If you are at the conference, feel free to come to any of my sessions; I would love to hear from you about the IBMNAS blog or any of my social media outlets. We are using the hashtag #ibmtechu all week if you want to follow what is going on via Twitter.
There is a demo coming up on January 20th that will show the integration of N series and VMware. The long-awaited Virtual Storage Console and Rapid Cloning will be the highlights of the demo. So what is VSC? It is N series software that enables administrators to manage and monitor storage-side attributes of ESX/ESXi hosts. VSC functions as a plugin to vCenter and uses APIs to set and retrieve information from the array.
VSC adds a tab into vCenter and enables the following:
View Status of Storage Controllers
View Status of physical hosts, including versions and overall status
Check for the proper configuration of ESX settings as they apply to HBA driver timeouts, with the ability to set the appropriate timeouts on multiple ESX hosts simultaneously with a single mouse click
Launch FilerView from within VSC for storage provisioning
Provides access to mbrtools (mbrscan, mbralign, mbrcreate) to identify and correct partition alignment issues
Ability to set credentials to access storage controllers
Ability to collect diagnostics from the ESX hosts, FC switches and Storage controllers
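The mbrtools item in the list above is worth a closer look. Misalignment comes down to simple arithmetic: if a guest partition's starting byte offset is not a multiple of the array's 4KB block size, every guest I/O can touch two back-end blocks instead of one. A toy illustration of the check (the LBA values are just classic examples, not output from the real tools):

```python
# Toy illustration of the misalignment problem that mbrscan/mbralign
# address: a partition whose byte offset is not a multiple of the
# array's 4KB block size straddles two back-end blocks per guest I/O.
SECTOR = 512        # bytes per LBA sector
ARRAY_BLOCK = 4096  # 4KB back-end block size

def aligned(start_lba):
    """True if a partition starting at this LBA lands on a 4KB boundary."""
    return (start_lba * SECTOR) % ARRAY_BLOCK == 0

print(aligned(63))  # classic MS-DOS default start LBA -> False
print(aligned(64))  # shifted one sector -> True
```

This is why the old DOS-era default of starting the first partition at sector 63 caused so much trouble on virtualized storage, and why tools that shift the partition start by one sector fix it.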
Top 10 Reasons clients choose to go with IBM N series
Some years ago I put together a list of reasons why people choose to buy from IBM rather than purchase directly from Netapp. IBM has an OEM agreement with Netapp and rebrands the FAS and V-Series as its N series product line. They are both made at the same plant, and the only difference between them is the front bezel. You can even take a Netapp bezel off, stick it on an N series box, and it fits exactly.

The software is exactly the same. All we change is the logos and readme files. The entire functionality of the product is exactly the same. IBM does not add or take away any of the features built into the systems. The only difference is that it takes IBM about 90 days once Netapp releases a product to get it put online with the branding changed.

Support for N series is done both at IBM and Netapp. Much like our other OEM partners, they stand behind IBM as the developers and IBM handles the issues. Customers still call the same 1.800.IBM.SERV for support and speak to trained engineers who have been working on N series equipment for 6+ years now. IBM actually has lower turnover than Netapp in its support division and has won awards for providing top-notch support. The call-home features that most people are used to still go to Netapp via IBM.
10. The IBM customer engineer (CE) who is working with you today will be the same person who helps you with the IBM N series system.

9. The IBM GBS team can provide consultation, installation and even administration of your environment.

8. IBM is able to provide financing for clients.

7. When you purchase your N series system from IBM, you can bundle it with servers, switches, other storage and software. This gives you one bill, one place to go if you need anything and one support number to call.

6. IBM has two other support offerings to help our clients. Our Supportline offering allows customers to call in and ask installation or configuration questions. We also have an Enhanced Technical Support (ETS) team that will assign a personal engineer who will know everything about your environment and provide you with everything you need. They will help you with health checks to be sure the system is running optimally, updates on the latest technology and a single point of contact in case you need to speak to someone immediately.

5. IBM N series warranty support is done by IBM technicians and engineers at Level 1 and Level 2. If your issue cannot be resolved by our Level 2 team, they have a hotline into the Netapp Top Enterprise Account team. This is a team only a few very large Netapp accounts can afford, and we provide this support to ALL IBM N series accounts no matter how large or small.

4. Our support teams from different platforms (X series, Power, TSM, DS, XIV, etc.) all interact with one another, and when tough issues come up we are able to scale to the size of the issue. We can bring in experts who know the SAN, storage, servers and software, all under one umbrella. For those tough cases we assign a coordinator so that the client does not have to call all of these resources themselves. This person can reach out to all the teams, assign duties and coordinate calls with you.

3. All IBM N series hardware and software is reviewed by an Open Source Committee that validates there are no violations, copyright infringements or patent infringements.

2. All IBM N series hardware and software is tested in our Tucson testing facility for interoperability. We have a team of distinguished engineers who support not only N series but other hardware and software platforms within the IBM portfolio.

1. All IBM N series equipment comes with a standard 3-year warranty for both hardware and software. This warranty can be extended beyond the three years, as IBM supports equipment well beyond the normal 3-5 year life of a system.
When it gets down to it, customers buy because they are happy. Since the systems are exactly the same, it comes down to what makes them happy. For some, the Netapp offering makes them happy because they like their sales engineer; others like IBM because they have been doing business with us for over 30 years.
For more information about IBM N series, check out our landing page at http://www-03.ibm.com/systems/storage/network/
First off, I want to say what an awesome year IBM had in storage! We announced several new products and improvements to older ones. SONAS was the NAS product of 2010 at IBM. The idea of bringing a parallel file system together with commodity parts is brilliant. People who have been building these systems for years, dealing with the issues of interoperability and supportability, can now focus more on making storage work for them. Real-time Compression was also released for the N series product. This was an acquisition that really helps IBM position compression technology in the NAS market. RTC today is an appliance that compresses data into smaller packages with no performance degradation. I believe we will see this technology spread into other parts of the storage line.
The biggest storage announcement was definitely the introduction of a new mid-tier storage device, the Storwize V7000. This device is based on the tried and true SVC code base with some new enterprise-class features from our DS8000 line. It has the cool XIV-like interface and a very cool form factor, and with things like Easy Tier and disk virtualization, the box is going to be hard to beat in 2011.
Second, I want to honor IBM as we celebrate our centennial year of business. The Computing-Tabulating-Recording Company started on June 15, 1911, and while the name has changed and our products and services have changed, our mission and dedication to our clients remain unchanged. So many of us do not even begin to understand the impact IBM has had on the world as it is today. IBM has been well known through most of its recent history as one of the world's largest computer companies and systems integrators. With over 388,000 employees worldwide, IBM is one of the largest and most profitable information technology employers in the world. IBM holds more patents than any other U.S.-based technology company and has eight research laboratories worldwide. The company has scientists, engineers, consultants, and sales professionals in over 170 countries. IBM employees have earned five Nobel Prizes, four Turing Awards, five National Medals of Technology and five National Medals of Science.
Lastly, I want to challenge everyone, IBMers, clients, everyone, to really look at what is going on in the storage space this year. With the explosive growth of data, we are seeing people buy unprecedented amounts of storage. Most vendors are going to be investing in R&D for storage and coming out with new, time-saving features. Clients should challenge their vendors to exceed their requirements, not just meet them. I also want vendors to look beyond products and start looking at the services that help clients make better decisions and support the products they have purchased.
One of my favorite TV programs is the BBC show Top Gear. They test cars not only for handling, looks, and cup holders but mainly for power. At the end they run all of the cars through the same test track and get a time. That time then gets recorded on their list of all the cars tested and is celebrated for achievement or scorned for doing poorly. No matter what time the car turns in, they are all treated equally.
Today, IBM is announcing results from a well-known benchmark called SPECsfs. This has been the yardstick for NAS vendors wanting to flex their muscles and show how they handle small-block I/O. Vendors can bring however many drives and tweaks they want, but the test itself is very rigid and has to be certified before the results are published. IBM put together a SONAS system consisting of 10 interface nodes and 8 storage pods with all SAS disk: a total of about 900TB of usable disk, and about 1/3 of the maximum SONAS configuration. There was no solid-state disk or extra tweaking, just a SONAS system that you could order today. That said, the IBM SONAS set a new world record for performance by a single file system at 403,000 operations per second. Yes, you read that right: 403k IOPS in a single file system. If you look at the other vendors, they have used multiple file systems and aggregated the performance together in order to achieve a benchmark number. They then tend to layer a virtual namespace in software over all of the file systems, but here SONAS is one file system over 900TB with a true global namespace. One issue with multiple file systems is that they cannot stripe data across the file systems, and load balancing becomes a problem. If you look at the comparison of performance per file system, you can see that IBM is WAY beyond the competitors. So you may be asking, "Yeah, that's pretty cool, but what was the response time?" According to the test, the average response time was 3.23 ms from 0 to 403k IOPS. This is extremely good, and when you consider that this came from one file system of 900TB, you realize how good that number is compared to other results. There will be tons of vendors trying to debunk how IBM outperformed them, claiming better software or better market share, but it really boils down to these key points:
· An all-spinning SAS disk SONAS configuration, typical of SONAS configurations being installed today
· A single file system featuring ease of use, minimum complexity, global load balancing, sharing of resources, and proof of scale
· 903 TB usable capacity, indicative of current real-life customer scale-out NAS requirements
· An environment in which all applications would benefit from the single file system, the high IOPS and the excellent response time
· One can clearly correlate the SONAS SPECsfs benchmark with the response time a real-world application would receive from today's SONAS
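One rough way to sanity-check the published figures is Little's law: outstanding operations ≈ throughput × response time. This is my own back-of-the-envelope estimate, not part of the official result:

```python
# Little's law applied to the published SPECsfs figures:
# concurrent outstanding operations ~= throughput x response time.
ops_per_sec = 403_000       # SONAS single-file-system record
response_sec = 3.23 / 1000  # 3.23 ms average response time

outstanding = ops_per_sec * response_sec
print(round(outstanding))   # -> roughly 1302 operations in flight
```

In other words, the single file system was comfortably servicing on the order of 1,300 concurrent operations while keeping response time in the low milliseconds.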
I have included the slide deck for the announcement below. Feel free to check out the information on the SPECsfs website.
There are few times that I look at what a company markets as the 'next big thing' in the storage world and get the same reaction I got when I started learning about the SONAS product. There are already some technical details in the announcement and in Tony's blog from a few days ago, so I won't go into those today, but I will go over how this product really makes a paradigm shift in the NAS storage world.
Traditionally, NAS storage is looked at as the little brother to the bigger SAN systems. SAN systems tend to be the athletes of the storage high school, with their matching letter jackets and oversized girth. All the while, NAS was the band geeks: some frail and thin, some oversized, but always in large numbers and not very organized. NAS technology was born from the need to share data across the company, and as the amount of information grew, so did the servers, network bandwidth and backups. SAN storage is still the big guy on campus, but the people who track trends for our industry say NAS has become just as important as the large databases, ERP systems and the like.
If you look at how we have stored NAS data, it has been on single file systems with local disk drives shared out over a single 10/100 Mbps network. As storage systems became more advanced, we saw people using clustering, snapshots, thin provisioning, de-duplication and replication to help keep our companies communicating. When we needed more throughput or more storage, we added a server or added disks, which created islands of unshared power.
Look at 2009 and one of the hottest buzzwords in the storage market: cloud computing, having a large pool of resources in one place to draw from without having to provision new equipment. We also saw more and more clients looking at NAS protocols as Ethernet began to support faster speeds than traditional Fibre Channel. Many of you have been looking at and moving your virtual environments to NFS to help cut down on administration overhead and to take advantage of CNA technology.
With a higher demand for NAS technology comes the burden of being able to scale at the same rate that storage, network and throughput requirements increase. Older NAS systems allowed clients to increase the amount of storage, but once you reached the maximum the system allowed, you had to purchase another clustered system. This creates multiple islands of storage pools that have to be managed, provisioned and backed up. Not a great solution for companies that are growing and have fewer administrators to do the work.
Now IBM has a product that allows our NAS clients to grow and scale as their companies grow. SONAS is a highly scalable NAS that works like a cloud. The underlying technology, GPFS, is the same file system found in some of the fastest computers in the world. SONAS scales in both storage and throughput by adding storage pods (60 SATA or SAS disks) or interface modules (x3650 servers) like Lego blocks. All of this is managed by a central command module that gives the client full control over the entire system, no matter how many storage pods or servers are in it.
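The Lego-block idea above is easy to sketch: capacity and throughput grow independently, one building block at a time. A minimal illustration follows; the 60-drive pod comes from the post, but the drive size and per-node throughput figures are hypothetical placeholders, not IBM specifications.

```python
# Illustrative sketch of independent scale-out, SONAS-style.
# DRIVES_PER_POD is from the post; DRIVE_TB and NODE_GBPS are
# hypothetical placeholder values, not real product numbers.

DRIVES_PER_POD = 60   # each storage pod holds 60 SATA/SAS disks
DRIVE_TB = 2.0        # hypothetical drive capacity in TB
NODE_GBPS = 1.0       # hypothetical throughput per interface node (GB/s)

def raw_capacity_tb(pods: int) -> float:
    """Raw (pre-RAID) capacity from a given number of storage pods."""
    return pods * DRIVES_PER_POD * DRIVE_TB

def aggregate_throughput_gbps(interface_nodes: int) -> float:
    """Aggregate front-end bandwidth from a given number of interface nodes."""
    return interface_nodes * NODE_GBPS

# Add pods when you need space, interface nodes when you need bandwidth:
print(raw_capacity_tb(pods=4))        # 480.0 (TB raw)
print(aggregate_throughput_gbps(6))   # 6.0 (GB/s)
```

The point of the sketch is the shape of the model, not the numbers: growing one dimension never forces you to buy the other.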
So the "next big thing," in my opinion, is here today, and IBM is putting the best of IBM research to work for its clients. The SONAS solution is designed from the ground up as a true blue NAS storage solution. Look for future SONAS blogs on GPFS, creating an ILM strategy and more.
Move that file! You know that show where people are moved out of their old house, an army of contractors comes in and builds a new house, and then the people come back and are astonished at their new home. I was watching an older episode the other night and realized how much this improves a family's mobility, productivity and state of mind. While their old house was OK and provided somewhat of a shelter, the new house was 100x better. I think of SONAS in the same way.

There are many ways to do NAS. Some take time to develop and build, while others are just as effective with little to no planning. I was talking to a client the other day, and his approach to NAS was to put NFS servers in all of their locations. It's cheap and something they can repeat, cookie-cutter style, many times over. What he was not taking into account was administering all of these islands of storage and how much he was spending on data sitting on expensive disk. If he could consolidate those servers and had a way of moving data around, and eventually off to the greenest storage media out there, tape, how much more money and time would that save him? He didn't have an answer, but we are working on a plan for him today.

IBM announced yesterday that SONAS version 1.1.1 will now support ILM tiering with GPFS and moving data off to tape using Tivoli Storage Manager HSM. These two work in concert with the policy manager on the SONAS system to move data in and out of pools based on metadata properties. As discussed in previous posts, SONAS separates the metadata, which allows the scan engine to pass the needed data on to the ILM or TSM agents. These agents then move data between the pools and allow the client to free up space on valuable spinning disk. If you are one of the people who says tape and tiering are not needed, think about the cost of keeping data that hasn't been touched on expensive disk when it could sit on media that costs $0.03 per GB.
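The policy-manager idea described above, scan file metadata, then let rules decide which pool each file belongs in, can be sketched in a few lines. This is only an illustration in the spirit of GPFS policy rules, not the actual SONAS policy syntax; the pool names, age thresholds and file list are all made up.

```python
# Hypothetical sketch of age-based tiering rules, similar in spirit to
# the GPFS policy engine the post describes. Pool names and thresholds
# are illustrative, not SONAS defaults.
import time
from dataclasses import dataclass

DAY = 86400  # seconds

@dataclass
class FileMeta:
    path: str
    size_gb: float
    atime: float  # last access time, epoch seconds

def choose_pool(f: FileMeta, now: float) -> str:
    """Assign a file to a pool based on how long since it was last read."""
    age_days = (now - f.atime) / DAY
    if age_days > 365:
        return "tape"   # cold data: HSM migrates it out to tape
    if age_days > 90:
        return "sata"   # warm data: cheaper, denser disk
    return "sas"        # hot data: stays on fast disk

now = time.time()
files = [
    FileMeta("/share/q1_report.xls", 0.5, now - 400 * DAY),
    FileMeta("/share/build.log",     2.0, now - 120 * DAY),
    FileMeta("/share/active.db",    10.0, now - 1 * DAY),
]
for f in files:
    print(f.path, "->", choose_pool(f, now))  # tape, sata, sas respectively
```

In the real system the scan engine reads the separated metadata and hands candidates to the ILM or TSM HSM agents; the sketch just shows why separating metadata makes that scan cheap: no file data is touched at all.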
It's not that your storage isn't cool, and you may still need tiers for high performance, but what if the only data on the system was data actively being used, and not my old spreadsheet from 2009? Along with the ILM announcement, IBM released the following with version 1.1.1:
SONAS with IBM XIV storage
Higher capacity SAS drives
HTTPS protocol support
Network Information Service (NIS) support
I will post more information this week and next on replication and the XIV integration.
There is an ancient proverb that says, "When you have only two pennies left in the world, buy a loaf of bread with one, and a lily with the other." There is some wisdom in this old saying that we can still apply to today's IT budget and strategy. If you have been keeping up with the news, you know companies are starting to invest again in their IT hardware and software. This may be the turning point in some of the hardest times the hardware business has seen. But what are customers really buying and planning to buy with their dollars? What is my bread and what is my lily today?

The bread represents nourishment of the body. We have to eat in order to keep going; without it, we starve and eventually die. This is the basic part of a business IT strategy: what do you have to do to keep the lights on? I have this conversation with IT planners all the time. People love to do the newest and greatest, but have a smaller understanding of, or take for granted, the things they have to do to keep the business going.

The lily is a beautiful and majestic flower. Dating as far back as 1580 B.C., when images of lilies were discovered in a villa in Crete, these majestic flowers have long held a role in ancient mythology. Derived from the Greek word "leiron" (generally assumed to refer to the white Madonna lily), the lily was so revered by the Greeks that they believed it sprouted from the milk of Hera, the queen of the gods.
The storage market is evolving with the help of cloud storage, unified platforms and consolidation. IT planners and CIOs are finding new ways to put value on these terms, offering their business units a charge-back model based not only on data consumption but on throughput and retention. Smarter businesses are seeing that running multiple storage platforms with trapped efficiency does not work in today's data center. Storage has to be big, wide and easy to use. Long gone are the days when 10-25 TB was a big deal. We now see systems that start at those levels and grow to enormous proportions. Networks are becoming faster and even consolidated, with 10/20 Gbps links driving protocols like FCoE and iSCSI. Backups are being replaced by better replication algorithms with quality-of-service levels and automated failover.
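A charge-back model that bills on throughput and retention as well as consumption can be sketched as a simple per-month formula. All of the rates and the example figures below are invented for illustration; they are not an IBM or industry pricing model.

```python
# Hypothetical charge-back sketch. Every rate here is an assumption made
# up for illustration, billing on capacity, sustained throughput, and
# how long the data must be retained, as the post describes.

def monthly_chargeback(capacity_gb: float,
                       throughput_mbps: float,
                       retention_years: float,
                       gb_rate: float = 0.10,        # $/GB per month (assumed)
                       mbps_rate: float = 0.50,      # $/(MB/s) per month (assumed)
                       retention_rate: float = 0.01  # $/GB/month per year retained (assumed)
                       ) -> float:
    """Bill a business unit on three dimensions, not just raw GB."""
    return (capacity_gb * gb_rate
            + throughput_mbps * mbps_rate
            + capacity_gb * retention_years * retention_rate)

# A unit holding 5 TB, sustaining 200 MB/s, with a 3-year retention policy:
print(round(monthly_chargeback(5000, 200, 3), 2))  # 750.0
```

The design point is that two units storing the same number of gigabytes can see very different bills once their I/O demand and retention obligations are priced in.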
NAS storage can take advantage of these technologies, which can also help you keep the lights on. Most businesses have some form of NAS storage to help employees share documents, spreadsheets, images and whatnot. There is a movement from traditional block-based systems to unstructured data sets on NAS, and it is pushing the market and vendors to come up with better NAS products. Companies like Amazon, Facebook and Twitter all push vendors to think about how they do storage.
So how are you planning your IT spending? Are you going to spend more on the things you have to have, or on the things that look nice? I suspect in most cases there will be an 80/20 bread-to-lily split. But how you classify what is needed versus what is "nice to have" in your IT department will change as your business changes this year. Businesses are putting more demand on IT with fewer resources. Even though there is evidence businesses have been spending more on hardware recently, the resources (admins) are still not there. The only way companies will achieve success with such high demand on storage, without the resources, is to have simple, scalable storage that lets a single admin manage multiple petabytes.
IBM is working to help customers achieve this new kind of IT department. Cloud is one way, public or even private, but it also starts at the basic system level. Less complicated interfaces, like those on the V7000 or XIV, let admins move easily without much training. SONAS offers large scale-out NAS storage where storage and throughput can be scaled independently.
This year, take time to figure out what is needed and what would just be cool to have in your department. Technology will always change, even if it's a change back to what we had 20 years ago (mainframe/virtualization). Keep in mind: what looks like a lily today may be a loaf soon. Where do you want to be when the business needs it?