The old adage of faster, smaller, cheaper has been revived in the N series product line. This week IBM officially released the information around the highly anticipated OEM rebrand of Netapp's FAS2040: the N3400. This system has a small 2U form factor but delivers higher performance than its beefier brother, the N3600. If you want to see a full comparison of the three boxes, click here for more information.
IBM has three systems that round out the entry level or departmental storage platform: the N3600, the N3300 and now the N3400. All three are based on internal drives with expansion to a few shelves as needed. The N3600 comes with 20 internal drives, while the smaller N3300 and N3400 come with only 12 internal disks and can expand to a maximum capacity of 136TB. Two controllers allow administrators to have a high availability solution at low cost. This makes the system more attractive, as it also supports FCP, iSCSI, CIFS and NFS all from one platform.
The N3400 does have a few things I want to point out:
8GB of RAM (2x the amount in the N3600 and 4x the amount of the N3300)
512MB of NVRAM
2 integrated SAS ports and 8 total 1 Gbps Ethernet ports
PCI-e slot for expansion
All of these help set this box up for an important role within your datacenter. If you compare this system with other storage systems in the market, you will find the new N3400 is well stacked and can compete even with larger mid-tier systems. This box is ideal for our SMB clients who really need an all-in-one system with the horsepower to keep up with a growing company. The system is a long way from the first entry level system IBM rolled out, the N3700. If the two were compared, the N3700 would be a 'Happy Meal' and the N3400 would be a super-sized 2lb Angus burger with fries and a shake, maybe even an apple pie.
This new system is considered ideal for both Windows consolidation and virtual environments alike. With the additional ports, the system should also enjoy a longer life span as the new EXN 3000 SAS shelves become the standard for the N series product line. On the other hand, the system does not support 10 Gbps cards or FCoE as the N3600 does. But as all N series systems run the same Data Ontap code, this robust system uses the same commands and interface and is built on the same technology as the larger N6000 and N7000 lines.
Overall, this is an enhanced refresh of the existing N3300 with more ability to scale with current technologies. The performance will be greater than the N3600, which begs the question of the need for the N3300/N3600 systems. I suspect that as Data Ontap 8 becomes generally available from Netapp, there will be more entry level storage devices released.
For more information on the N3400 and all other N series related information, follow this link or contact your local IBM Storage Rep.
IBM released a new Data Ontap version last Friday along with some other minor releases, but more about those later. Data Ontap 8 7-mode is the first release of a new 64-bit architecture that will allow N series customers to take advantage of larger aggregates.

A little history: about 8 years ago, Netapp purchased a company named Spinnaker for their 64-bit code, global namespace and some other odds and ends. For the most part, Netapp had been rebranding this code as their GX platform, allowing customers who wanted the feature set to purchase it apart from their Data Ontap base. GX was not a heavy seller, as it was complicated to install and much pricier than the other brand, so Netapp decided to co-mingle the two code streams into one. At first glance this sounds like a good idea. The Data Ontap code definitely had some limitations (small aggregate sizes, limited growth and no global namespace), but merging the two streams was harder than Netapp imagined. This was shown by Netapp promising a release of the merged code for years before a release was finally available for testing. There were many bugs (as RC code often has), but Netapp worked through the majority of them to produce a stepping-stone release of the merged code called 7-mode.

The developers used bits and pieces of the GX code to get the 64-bit architecture, allowing customers to build larger aggregates, up to 100TB in size. This was really important because the 2TB SATA drives were coming, and the 16TB limit on an aggregate would have killed performance on the system. With only 8 x 2TB drives in the aggregate, throughput would be limited to about 400 IOPS per 16TB of drive space, not a good ratio at all. A larger aggregate size therefore allows customers to put up to 50 x 2TB drives in one aggregate, achieving a more respectable 2,500 IOPS. Now that we have 7-mode available, there are some upsides and some downsides.
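The arithmetic behind those aggregate numbers is worth spelling out. Here is a quick back-of-the-envelope sketch; the ~50 IOPS per 2TB SATA spindle is a rough planning figure implied by the numbers above, and the helper function name is mine, not a Data Ontap API:

```python
# Back-of-the-envelope IOPS math behind the larger 64-bit aggregates.
# Assumes a hypothetical planning figure of ~50 IOPS per 7.2k SATA spindle,
# consistent with the ~400 IOPS / 16TB ratio quoted above.

IOPS_PER_SATA_DRIVE = 50  # assumed per-spindle figure for a 2TB SATA disk
DRIVE_TB = 2

def aggregate_iops(num_drives, iops_per_drive=IOPS_PER_SATA_DRIVE):
    """Aggregate throughput scales with spindle count, not raw capacity."""
    return num_drives * iops_per_drive

# 32-bit aggregate: capped at 16TB, so only 8 x 2TB data drives fit.
old_limit = aggregate_iops(16 // DRIVE_TB)

# 64-bit aggregate: up to 100TB, with room for 50 x 2TB drives.
new_limit = aggregate_iops(50)

print(old_limit, new_limit)  # 400 2500
```

The point of the sketch is that capacity per spindle grew much faster than IOPS per spindle, so without bigger aggregates the extra terabytes would starve for I/O.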
First, as stated above, the aggregate sizes have increased tremendously. Allowing more data disks in the aggregate increases the amount of IOPS the filer can pool. On the downside, you cannot simply flip a switch and convert an aggregate created in the old 32-bit code to a new 64-bit aggregate. Customers will have to create a new aggregate after upgrading to the 7-mode version of Data Ontap 8 and then migrate with some restore method (think DR restore from backup) onto the new space. You cannot mirror the two, as SnapMirror can only mirror between like-for-like aggregates (32-bit to 32-bit and 64-bit to 64-bit). No big deal if you are a new customer or if the filer is a new addition to the filer farm, but for existing customers I believe this will be a lot tougher. If you do not have the drive space to create a new aggregate of up to 100TB, you will have to either wait to buy more disks or do a manual backup (not a snapshot), destroy the existing aggregate, build a new aggregate on the 64-bit code, and then restore. This, along with the fact that this is the first release of the new code family, is why customers will not adopt the new code very quickly. There are also some other gotchas, like no support for Performance Accelerator Cards (PAM II), no real interoperability between the two code bases, and more. When I was an administrator, I hated having to read the release notes for the 'fine print gotchas,' but in this case I encourage everyone to read the notes thoroughly and perhaps engage your local IBM Storage engineer to help you assess whether you are a good candidate to upgrade or not. The fact that this is a stepping stone to the full code line does help customers who need to move to the 64-bit architecture today without slowing down Netapp's development team. They are working on the next release of Data Ontap 8, called cluster mode.
This will be the code that allows customers to cluster more than one pair of systems under one global namespace. I suspect this will be a great addition to the Data Ontap code line and will give Netapp more traction in the larger enterprise business. There were also some firmware releases for the EXN3000 shelf on Friday. For more information on what was released, visit the support page at www.ibm.com.
Today IBM is releasing two new N6200 systems that will be a huge improvement over the existing N6000 systems. The two new systems bring a bump in capacity and performance along with more flexibility. For a very crowded midrange market, this new N series product set will bridge the gap between entry level and enterprise class systems.
One of the biggest issues with the previous N6000 systems was the limited number of PCI-e slots. The other was the lack of more common onboard hardware like SAS and 10 Gbps Ethernet.
The first thing that stands out to me is the footprint of the new system. The older N6000 requires 6U for an HA pair. The new N6200 is half the size, occupying only 3U for an HA pair, or for a single node with an I/O expansion module that provides an additional four PCI-e slots. Another configuration is two controllers with two expansion modules in a total space of 6U (equal to the older N6000 systems) but with a total of 12 PCI-e slots (vs 8 on the older N6000).
We recommend using the two slots built into the controller for high performance 10GbE and/or 8 Gbps FC adapters. The additional slots in the expansion module can be used for Flash Cache cards and other disk connectivity.
The on-board hardware is getting a face lift as well. While the new system sports a 10GbE port, it is used mainly for the interconnect and nothing else. This is one of the disappointments I have with the system, but I understand this is how Netapp will accomplish scale-out clustering.
FC ports were kept at 4 Gbps, but there are two new SAS ports with matching ACP (alternate control path) ports used for offloading some of the traffic from the data path.
One of the unsung updates is in the NVRAM. Instead of the separate memory used in the past, we now see an improvement using something called Asynchronous DRAM Refresh (ADR). This is a self-refresh mode in the Intel chipset that allows a portion of main memory to be backed by an on-board battery. This gives the NVRAM the same high bandwidth as main memory and also simplifies the design of the motherboard.
This, along with the introduction of new Intel processors, gives the N6200 systems a bump in performance. The SPECsfs benchmark on the highest N6200 system showed 101,183 ops at 1.66ms ORT, compared to the N6060's 60,507 ops at 1.58ms ORT, an improvement of about 67% in SFS throughput.
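The improvement figure follows directly from the two published results; a one-liner to sanity-check the quoted numbers:

```python
# Sanity-check of the SPECsfs figures quoted above.
n6200_ops = 101_183   # top N6200 result (ops/sec)
n6060_ops = 60_507    # N6060 result (ops/sec)

improvement = (n6200_ops - n6060_ops) / n6060_ops * 100
print(round(improvement))  # 67 -> roughly two-thirds more SFS throughput
```

Note that ORT (overall response time) is a separate metric and went up slightly (1.58ms to 1.66ms), so the gain is in throughput, not latency.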
IBM is introducing the IBM System Storage N6210 Series and the IBM System Storage N6240 Series. These new systems replace the IBM N3600 and N6040 Series respectively. GA is scheduled for January 28, 2011 (N6240) and February 25, 2011 (N6210). Here is the slide deck published with the release.
In answer to your requests for IBM N series demos, Andrew Grimes will be delivering a demo on Thursday, March 11th. This Introduction to IBM N series will be followed by a brief and informative demonstration of how N series delivers storage efficiency with disaster recovery solutions. This is your opportunity to demonstrate N series features and ease-of-use to your customers and prospects, plus get some assistance in closing business this quarter. All attendees who fill out the post-event survey will be entered into a drawing for a free Apple iPod.
The topics that will be discussed during this N series presentation are:
1. Simplifying data management
2. Storage efficiency
3. Protecting mission critical business applications (Oracle, Exchange, SQL, VMware & SAP) better than our competitors
4. Most importantly, see how we recover these applications in a matter of minutes!
Top 10 Reasons clients choose to go with IBM N series
Some years ago I put together a list of reasons why people choose to buy from IBM rather than purchase directly from Netapp. IBM has an OEM agreement with Netapp and rebrands the FAS and V-series as its N series product line. They are both made at the same plant, and the only difference between them is the front bezel. You can even take a Netapp bezel off and stick it on an N series box and it fits exactly.

The software is exactly the same. All we change is the logos and readme files. The entire functionality of the product is exactly the same. IBM does not add or take away any of the features built into the systems. The only difference is that it takes IBM about 90 days once Netapp releases a product to get it online and rebranded.

Support for N series is done both at IBM and Netapp. Much like our other OEM partners, Netapp stands behind IBM as the developer and IBM handles the issues. Customers still call the same 1.800.IBM.SERV for support and speak to trained engineers who have been working on N series equipment for 6+ years now. IBM actually has lower turnover than Netapp in its support division and has won awards for providing top-notch support. The call home features that most people are used to still go to Netapp via IBM.
10. The IBM customer engineer (CE) who works with you today will be the same person who helps you with the IBM N series system.

9. The IBM GBS team can provide consultation, installation and even administration of your environment.

8. IBM is able to provide financing for clients.

7. When you purchase your N series system from IBM, you can bundle it with servers, switches, other storage and software. This gives you one bill, one place to go if you need anything and one support number to call.

6. IBM has two other support offerings to help our clients. Our Supportline offering allows customers to call in and ask installation or configuration questions. We also have an Enhanced Technical Support (ETS) team that will assign a personal engineer who knows everything about your environment and will provide you with everything you need. They will help you with health checks to be sure the system is running optimally, updates on the latest technology and a single point of contact in case you need to speak to someone immediately.

5. IBM N series warranty support is done by IBM technicians and engineers at Level 1 and Level 2. If your issue cannot be resolved by our Level 2 team, they have a hotline into the Netapp Top Enterprise Account team. This is a team only a few very large Netapp accounts can afford, and we provide this support to ALL IBM N series accounts no matter how large or small.

4. Our support teams from different platforms (X series, Power, TSM, DS, XIV, etc.) all interact with one another, and when tough issues come up we are able to scale to the size of the issue. We can bring in experts who know the SAN, storage, servers and software all under one umbrella. For those tough cases we assign a coordinator to make sure the client does not have to call all of these resources themselves. This person can reach out to all the teams, assign duties and coordinate calls with you.

3. All IBM N series hardware and software goes through an Open Source Committee that validates there are no violations, copyright infringements or patent infringements.

2. All IBM N series hardware and software is tested in our Tucson testing facility for interoperability. We have a team of distinguished engineers who support not only N series but other hardware and software platforms within the IBM portfolio.

1. All IBM N series equipment comes with a standard 3 year warranty for both hardware and software. This warranty can be extended beyond the three years, as IBM supports equipment well beyond the normal 3-5 year life of a system.

When it gets down to it, customers buy because they are happy. Since the systems are exactly the same, it comes down to what makes them happy. For some, the Netapp offering makes them happy because they like their sales engineer; others like IBM because they have been doing business with us for over 30 years.
For more information about IBM N series, check out our
landing page on http://www-03.ibm.com/systems/storage/network/
Every year IBM puts on a conference for all of our clients, business partners and strategic partners. This conference has both Storage and X series sessions along with keynote speakers from the top management at IBM. People come from all over the world to this conference looking for the 'how to' answers and what's to come with the product lines. There is also a solution center that houses all of the products along with our sponsors. This year our top platinum sponsors are Cisco, Intel and Netapp. Other sponsors include Brocade, Emulex, Fusion-io, VMware, Red Hat and SUSE. I plan to be working in the solution center at the SONAS booth, talking to clients about the benefits of SONAS and how it fits into their environments. If you want to stop by, here are the hours that I will be there:
Monday, July 18th
Solution Center Open 5:30 PM – 7:30 PM (w/ Networking Drinks)

Tuesday, July 19th
Sponsor / Exhibitor Only Lunch 11:15 AM – 11:45 AM
Solution Center Open 11:45 AM – 1:00 PM (Coffee & Dessert served in the Solution Center)
Solution Center Open 5:30 PM – 7:30 PM (w/ Networking Drinks)

Wednesday, July 20th
Sponsor / Exhibitor Only Lunch 11:15 AM – 11:45 AM
Solution Center Open 11:45 AM – 1:00 PM (Coffee & Dessert served in the Solution Center)
I will also be presenting a few sessions on NAS technology at the conference. Most of my sessions will look at what IBM is doing with SONAS, N series and Real Time Compression. I have a NAS 101 class that I really love doing, because there are so many people who have a misconception of what NAS is today. In my N series update session we will be talking about the latest release of the N6270 and the EXP 3500, as well as a peek at the R23 release coming in a few weeks. The other two sessions I am doing are a little off the topic of NAS, covering social media and using www.ibm.com for help. Tony Pearson, John Sing and Ian Wright will be joining me on a panel to discuss the roles we play in social media and what each of us thinks of its future. The support session is something a client suggested to me out of their frustration with finding documents on our support pages. Here is a list of sessions and times that I will be presenting:
7/18 - 1:00 sSN14 Storage Networking (NAS - SAN) NAS 101: An Introduction to Network Attached Storage
7/19 - 10:30 sSN15 Storage Networking (NAS - SAN) NAS @ IBM
7/19 - 1:00 sSN18 Storage Networking (NAS - SAN) IBM N series: What's New?
7/20 - 1:00 sGE10 General Tips and Tricks on Searching for Support Answers on ibm.com
7/20 - 5:30 sGE61 General Using Social Media in System Storage
7/21 - 10:30 sSN18 Storage Networking (NAS - SAN) IBM N series: What's New?
7/21 - 2:30 sSN15 Storage Networking (NAS - SAN) NAS @ IBM
If you are at the conference, feel free to come to any of my sessions; I would love to hear from you about the IBMNAS blog or any of my social media outlets. We are using the hashtag #ibmtechu all week if you want to follow what is going on via the Twitter feed.
There is a demo coming up on January 20th that will show the integration of N series and VMware. The long awaited Virtual Storage Console and Rapid Cloning will be the highlights of the demo. So what is VSC? It is N series software that enables administrators to manage and monitor storage-side attributes of ESX/ESXi hosts. VSC functions as a plugin to vCenter and uses APIs to set and retrieve information from the array.
VSC adds a tab into vCenter and enables the following:
View Status of Storage Controllers
View Status of physical hosts, including versions and overall status
Check for the proper configuration of ESX settings as they apply to HBA driver timeouts
Set the appropriate timeouts on multiple ESX hosts simultaneously with a single mouse click
Launch FilerView from within VSC for storage provisioning
Access to mbrtools (mbrscan, mbralign, mbrcreate) to identify and correct partition alignment issues
Ability to set credentials to access storage controllers
Ability to collect diagnostics from the ESX hosts, FC switches and Storage controllers
May 9th has been a target on my calendar for some time now. Inside IBM, we have been waiting for this day so we could talk about the new things being released in the storage platform. It almost feels like Christmas morning, with a bunch of new presents under the tree. Each gift holds something that is either really cool or very useful. The only difference is that your Aunt Matilda and her little dog are not coming over for brunch.
Under the IBM tree today is a slew of presents for almost the entire storage platform. I will concentrate on just the IBM NAS ones, but if you are interested in knowing what is going on elsewhere, you can find more information at the main website.
SONAS must have been a good boy, because there are plenty of gifts for him under the tree this morning. Not only did he find presents under the tree, there were also a few little things in his stocking. Here is what Santa brought:
-A hardware update to the X3650 nodes. Just like before, the SONAS system uses this impressive workhorse, but now it uses the more powerful M3 class with a six-core 2.66GHz Intel Xeon processor. It has 24GB of DDR3 RAM, with the option to increase to a total of 144GB of DDR3 RAM per interface node. Also new with the X3650 is the option to include a second processor, doubling the number of cores to 12 total per node.
-New disk subsystem support: in addition to XIV, SONAS now supports the SVC and V7000 as disk subsystems. This is a huge gift, because SONAS can now support tons of other storage under the awesome virtualization of the SVC code. V7000 support is also interesting, as that platform has the virtualization code from SVC but also supports its own drive architecture, including solid state drives.
-In the same category as sweaters, SONAS gets a smaller rack extender. In the past IBM has used a 16 inch extender in order to accommodate the large 60 drive disk enclosure. That has now been trimmed down to only 8 inches, and to zero for the gateway model and the RXC rack that houses only interface nodes.
-A file system upgrade to GPFS 3.4 PTF4. This will provide a significant performance improvement over the R1.1.1x release. The updated file system handles small file and random I/O a lot more efficiently. With this update we now use the role of manager nodes instead of interface nodes to gain more flexibility in how we track data in cache.
-Among the smaller gifts SONAS received were new support for NDMP, anti-virus support, use of both 10GbE ports on the same CNA and some power updates for the EU countries. Along with all of that, there is a new performance monitoring package called Perfcol that collects more information for analysis.
This SONAS release is labeled R1.2 and can be obtained by
contacting the technical advisor assigned to you.
Santa was also at the N series house and dropped off a few gifts: a new N6270 to replace the N6070. This new system is in line with the N6200 series, with larger amounts of RAM and more processors. Just like the smaller N6240, there is an expansion controller where customers can add more PCI control cards like HBAs, 10GbE or even FCoE. A new disk shelf was also released which uses the smaller 2.5 inch drives with an improved back end.

And over at the Real Time Compression house, they got new support for EMC Celerra.

Overall, a very busy time of year for IBM (and Santa), as these were just a fraction of the storage announcements today. Also today is the IBM Storage Executive Summit in New York City. My friend and fellow blogger Tony Pearson is covering this great event and will be updating his blog and Twitter feed. If you were not able to make it to NYC for the event, feel free to tweet him your questions @az990tony. You can also send questions to our IBM Storage feed at @ibmstorage.
This weekend I was working on moving some of my winter clothes and spring/summer clothes in and out of my closet and into containers. Last fall I purchased a few plastic containers that sealed, so I could put my short sleeve golf shirts and some of my shorts away. Here in North Carolina we can get a mild day, and it is nice to have a short sleeve shirt to wear. On those days I would go back to the containers and dig through the nicely folded items until I found the shirt I wanted. Sometimes I had to go through multiple containers, because I had forgotten which one I had put it in a few months ago. This weekend when I pulled out the containers they were a mess; nothing was folded, and it took me more time trying to figure out what was what as they all were mixed up. I then wondered: what if I bought a bigger container and, instead of using multiple ones, used one large container to store all of my winter clothes? What would the issues be? Would I have enough space to store the container? Would there be some way of indexing the clothing inside to quickly find what I was looking for? Was there a way to put clothes that I would need in case of a cool day in a separate container, just in case? There seemed to be more issues with using one larger container than I thought. It would be easy to dump all the clothes into the larger bin and claim victory, but that would not help me down the road. I needed a system, something to help me consolidate efficiently while still giving me access to those things I used on occasion. I also had to keep in mind the space I was going to use in my storage area. I didn't want to buy one large container and not be able to get it in the space I had already allocated. I needed a flexible system, maybe a few boxes that had labels and that I could get to quickly if I needed something inside. Take a look at some of the noise the storage vendors are making about data storage consolidation.
Most of them are telling you they can come in, take your smaller boxes and dump the data into one big box. While that saves you space and might keep you from administrating multiple storage devices, you need to look at the downside of having just one big pool of disks. A large storage system that replaces multiple smaller systems will need more cache and processor power to handle the same load as before. If you want to move data around to different tiers of disks or tape, can you achieve that with the new system? I started down the road of buying the biggest container I could find but decided against it, as it would be too much trouble to find things. Your data storage systems need to be flexible enough to have multiple storage pools, so that less frequently accessed data can move off faster disk to make room for data that is accessed more often. This not only gives your clients better response times on files they frequently use, it also tells you how much 'real' data people are using in your data center. The other issue I had was that I needed some type of labeling system or index to tell me where the shirts were and where my ski jacket was, etc. Your data is much the same: you need to keep up with where data lives in the storage system. As our storage systems get larger, we need to save the file metadata easily and keep it in a table so we can run queries against it. There is also the last part of moving my clothes around that I hated the most: the purge. I went through and found the shirts that had been worn too many times or no longer fit the same as when I bought them. I packed these in a cheap cardboard box and took them to a donation place. This is the same as getting rid of old data in your system. Old data that is not being accessed is costing you money. You not only have to pay the environmental cost of keeping those bits spinning, but it's taking up room where new data could reside, therefore costing you money to expand.
True archiving and purging of data will be needed for any system, large or small. Make sure you find a system that is easy to work with and automates this process based on policy. In the end, if you are looking at consolidating your data storage, there are multiple things you will need to find out about a system. A bigger container that can replace multiple smaller containers may not give you the flexibility needed to meet your changing needs. For more information on a better way to consolidate your storage platform and move your data, check out the information on SONAS and TSM.
Now available is the IBM System Storage N series with VMware vSphere Redbook.
Redbooks are a great way of learning a new technology or a reference
for configuration. I have used them for years not just in storage but
for X series servers and for software like TSM. The people that write
the books spend a great deal of time putting them together and I believe
most of them are written by volunteers.
This is the third edition of this Redbook and if you have read this before here are some of the changes:
-Latest N series model and feature information.
-Updated the IBM Redbook to reflect VMware vSphere 4.1 environments
-Information for Virtual Storage Console 2.x has been added
This book on N series and VMware goes through the introduction of
both the N series systems and VMware vSphere. There are sections on
installing the systems, deploying the LUNs and recovery. After going
through this Redbook, you will have a better understanding of a complete
and protected VMware system. If you need help with how to size your
hardware, there is a section for you. If you are looking to test how to
run VMs over NFS, it's in there too!
One of the biggest issues with virtual systems is making sure you have proper alignment between the guest file system blocks and the storage array. Misalignment can degrade performance by a factor of 2 on most random reads/writes, as two array blocks must be touched to serve one request. To avoid this costly mistake, or to correct VMs you have already set up, a section in the book called Partition Alignment walks you through the entire process of correctly setting the alignment or fixing older systems.
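The factor-of-2 penalty is easy to see with a little arithmetic. The sketch below uses assumed, illustrative numbers (4 KiB array blocks and the classic 63-sector partition offset from legacy MBR layouts); the helper function is mine, not part of any N series or VMware tool:

```python
# Illustrative sketch: why a misaligned partition roughly doubles back-end I/O.
# Assumes 4 KiB array blocks and 4 KiB guest I/O (hypothetical numbers).

BLOCK = 4096  # storage array block size in bytes

def blocks_touched(offset, size, block=BLOCK):
    """Count how many array blocks the byte range [offset, offset+size) spans."""
    first = offset // block
    last = (offset + size - 1) // block
    return last - first + 1

# Aligned partition: a 4 KiB guest I/O maps to exactly one array block.
aligned = blocks_touched(offset=0, size=4096)

# Misaligned partition (legacy 63-sector start = 63 * 512 bytes): every
# 4 KiB guest block straddles two array blocks.
misaligned = blocks_touched(offset=63 * 512, size=4096)

print(aligned, misaligned)  # 1 2 -> twice the back-end work per request
```

This is exactly what mbrscan detects and mbralign corrects: shifting the partition start to a multiple of the array block size so every guest block lines up with one array block.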
Another area I will point out is the use of deduplication, compression and cloning to drive the efficiency of the storage higher. These software features allow customers to store more systems on the storage array than traditional provisioning would allow. There is also coverage of how to use snapshots for cloning, mirrors for Site Recovery Manager and long term storage, aka SnapVault. At the end of the book are some examples of scripts one might use for snapshots in hot backup modes.
Whether you are a seasoned veteran or a newbie to the VMware scene, this is a great guide that will help you from start to finish in setting up your vSphere environment. The information is there; use the search feature or sit down on a Friday with a highlighter, whichever fits your style, and learn a little about using an N series system with VMware.
There is an ancient proverb that says, "When you have only two pennies left in the world, buy a loaf of bread with one, and a lily with the other." There is some wisdom in this old saying that we can still apply to today's IT budget and strategy. If you have been keeping up with the news, you would know companies are starting to invest again in their IT hardware and software. This may be the turn in some of the hardest times in the hardware business. But what are customers really buying and planning to buy with their dollars? What is my bread and what is my lily today? The bread represents nourishment of the body. We have to eat in order to keep going. Without it, we starve and eventually die. This would be the basic part of a business IT strategy: what do you have to do to keep the lights on? I have this conversation with IT planners all the time. People love to do the newest and greatest, but have less understanding of, or take for granted, the things they have to do to keep the business going. The lily is a beautiful and majestic flower. Dating as far back as 1580 B.C., when images of lilies were discovered in a villa in Crete, these majestic flowers have long held a role in ancient mythology. Derived from the Greek word "leiron" (generally assumed to refer to the white Madonna lily), the lily was so revered by the Greeks that they believed it sprouted from the milk of Hera, the queen of the gods.
The storage market is evolving with the help of cloud storage, unified platforms and consolidation. IT planners and CIOs are dealing with a new way of putting value to these terms, offering their business units a chargeback model based not only on data consumption but on throughput and retention. The smarter businesses are seeing that running multiple storage platforms with trapped efficiency does not work in today's data center. Storage has to be big, wide and easy to use. Long gone are the days when 10-25 TB was a big deal. We now see systems that start at those levels and scale to enormous proportions. Networks are becoming faster and even consolidated, with 10/20 Gbps driving protocols like FCoE and iSCSI. Backups are being replaced by better replication algorithms that have quality of service levels and automated failover.
NAS storage can take advantage of these technologies, which can also help you keep the lights on. Most businesses have some form of NAS storage to help employees share documents, spreadsheets, images and whatnot. There is a movement from traditional block-based systems to unstructured data sets using NAS, and this is pushing the market and vendors to come up with better NAS products. Companies like Amazon, Facebook and Twitter all push vendors to think about how they do storage.
So how are you planning your IT spending? Are you going to spend more on the things you have to have, or on the things that look nice? I suspect in most cases there will be an 80/20 split of bread to lily. But how you classify what is needed and what is 'nice to have' in your IT department will change as your business changes this year. Businesses are putting more demand on IT with fewer resources. Even though there is evidence that businesses are spending more on hardware recently, the resources (admins) are still not there. The only way companies will be able to succeed with such high demand on storage, without the resources, is to have simple, scalable storage that allows single admins to manage multiple petabytes.
IBM is working to help customers achieve this type of new IT department. Cloud is one way, either public or private, but it also starts at the basic system level. Less complicated interfaces like those on the V7000 or XIV let admins move easily without much training. SONAS offers large scale-out NAS storage where capacity and throughput can be scaled independently.
This year, take time to figure out what is needed and what would just be cool to have in your department. Technology will always change, even if it's a change back to what we had 20 years ago (mainframe/virtualization). Keep in mind that what looks like a lily today may be a loaf soon; where do you want to be when the business needs it?
Do you expect more out of your storage? IBM thinks you should and is putting its money where its mouth is. In the past it has gone under different names like STG University and Storage Symposium, but now IBM has revamped its premier storage conference. The big announcement came today with much fanfare that included a new website, some videos and a bunch of hype on Twitter. A three-part conference for executives, gear heads and business partners, there is something for everyone. But what will be different from years past? I think IBM looked at how other vendors use conferences to help pump up their customer base (VMworld, EMCwhatever) and decided to put some hype into the conference. Think of this as a great place to go and network, learn and have a good time. The conference will be in Orlando, and there will be plenty of time to sit in classrooms and learn about the latest technologies, but there will also be sessions where IBM will be pulling in our top execs and analysts to tell you where IBM is going in the storage world.
Executive Edge will feature speakers including Jeff Jonas, Aviad Offer and IT finance expert Calvin Braunstein. This track will take executives through new announcements, deep dives on technical platforms, one-on-one sessions with IBM execs and some great entertainment. This is a new feature of the conference, as in the past it was geared more towards the technical teams.
Of course, seats at Executive Edge will be limited, so talk to your local storage sales person for a chance to be part of this special event. There will be time to bring in your team and have special sessions and round tables with the IBM engineers who can help you find your way down this path of crazy storage growth. And there is a golf course on site, which I have heard is very nice. Bring your clubs or rent them; I am sure there will be plenty of us out there, so find a partner and have a good time.
More importantly, IBM is making the effort to step up the event and put it on par with other IBM conferences like Pulse. The technical portion will have over 250 sessions on storage-related topics. You will also get roadmap information from the product teams, as well as a chance to become a certified technician. One area that has been expanding is our hands-on labs, and this year we will have the biggest one yet. You will be able to come into the labs, actually see our storage systems and have a chance to 'test drive' them.
Early bird registration is open now and you can sign up today. The conference will be in sunny Orlando, Florida at the Waldorf Astoria and Hilton Orlando at Bonnet Creek. The event starts on June 4th and runs to the 8th. You can follow the conference on Twitter @IBMEdge and use the hashtag #ibmedge. For the conference website, go here.
First off, I want to say what an awesome year IBM had in storage! We announced several new products and improvements to older ones. SONAS was the NAS product of 2010 at IBM. The idea of taking a parallel file system and merging it with commodity parts is brilliant. People who have been building these systems for years, dealing with the issues of interoperability and supportability, can now focus more on making storage work for them. Real-time Compression was also released for the N series product. This was an acquisition that really helps IBM position compression technology in the NAS market. RTC today is an appliance that compresses data into smaller packages with no performance degradation. I believe we will see this technology spread into other parts of the storage line.
The biggest storage announcement was definitely the introduction of a new mid-tier storage device, the Storwize V7000. This device is based on the tried-and-true SVC code base with some new enterprise-class features from our DS8000 line. The system has the cool XIV-like interface and a very nice form factor, and with things like Easy Tier and disk virtualization, the box is going to be hard to beat in 2011.
Second, I want to honor IBM as we celebrate our centennial year of business. The Computing-Tabulating-Recording Company started on June 15, 1911, and while the name has changed and our products and services have changed, our mission and dedication to our clients remain unchanged. So many of us do not even begin to understand the impact IBM has had on the world as it is today. IBM has been well known through most of its recent history as one of the world's largest computer companies and systems integrators. With over 388,000 employees worldwide, IBM is one of the largest and most profitable information technology employers in the world. IBM holds more patents than any other U.S.-based technology company and has eight research laboratories worldwide. The company has scientists, engineers, consultants and sales professionals in over 170 countries. IBM employees have earned five Nobel Prizes, four Turing Awards, five National Medals of Technology and five National Medals of Science.
Lastly, I want to challenge everyone, IBMers and clients alike, to really look at what is going on in the storage space this year. With the explosive growth of data, we are seeing people buy unprecedented amounts of storage. Most of the vendors are going to be investing in storage R&D and coming out with new, time-saving features. Clients should challenge their vendors to exceed their requirements, not just meet them. I also want vendors to look beyond products and start looking at the services that help clients make better decisions and support the products they have purchased.
There are few times when I look at what a company markets as the 'next big thing' in the storage world and get the same reaction I got when I started learning about the SONAS product. There are already some technical details in the announcement and in Tony's blog from a few days ago, so I won't go into those today, but I will go over how this product makes a real paradigm shift in the NAS storage world.
Traditionally, NAS storage is seen as the little brother to the bigger SAN systems. SAN systems tend to be the athletes of the storage high school, with their matching letter jackets and oversized girth. All the while, NAS was the band geeks: some frail and thin, some oversized, but always in large numbers and not very organized. NAS technology was born from the need to share data across the company, and as the amount of information grew, so did the servers, network bandwidth and backups. SAN storage is still the big guy on campus, but the people who track trends for our industry say NAS has become just as important as the large databases, ERP systems and the like.
If you look at how we have stored NAS data, it has been on single file systems with local disk drives shared out over a single 10/100 Mb network. As storage systems became more advanced, we saw people using clustering, snapshots, thin provisioning, deduplication and replication to keep our companies communicating. When we needed more throughput or more storage, we added a server or added disks, which created islands of unshared capacity and performance.
If you look at 2009, one of the hottest buzzwords in the storage market was cloud computing: having a large pool of resources in one place to draw from without having to provision new equipment. We also saw more and more clients looking at NAS protocols, as Ethernet could support faster speeds than traditional Fibre Channel. Many of you have been looking at, and moving, your virtual environments to NFS to help cut down on administration overhead and to take advantage of CNA technology.
With higher demand for NAS technology comes the burden of scaling at the same rate that storage, network and throughput needs increase. Older NAS systems allowed clients to add storage, but once you reached the maximum the system allowed, you had to purchase another clustered system. This creates multiple islands of storage pools that have to be managed, provisioned and backed up separately. Not a great solution for companies that are growing and have fewer administrators to do the work.
Now IBM has a product that allows our NAS clients to grow and scale as their companies grow. SONAS is a highly scalable NAS that works like a cloud. The underlying technology, GPFS, is the same file system found in some of the fastest computers in the world. SONAS scales storage and throughput independently by adding storage pods (60 SATA or SAS disks) or interface nodes (x3650 servers) like Lego blocks. All of this is managed by a central management module that gives a client full control over the entire system, no matter how much storage or how many servers are in it.
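The Lego-block idea above can be sketched in a few lines of code. This is not SONAS or GPFS code; it is a toy model where the class name, per-pod disk count beyond the 60 stated above, drive size and per-node throughput are all assumptions, chosen only to show that capacity and throughput grow along separate axes.

```python
# Toy model of a scale-out NAS cluster: capacity grows by adding storage
# pods, throughput grows by adding interface nodes, independently.
# The per-unit numbers are illustrative assumptions, not product specs.

class ScaleOutNAS:
    DISKS_PER_POD = 60       # 60 SATA/SAS disks per storage pod (from the text)
    TB_PER_DISK = 2          # assumed drive size
    GBPS_PER_NODE = 4        # assumed throughput per interface node

    def __init__(self):
        self.storage_pods = 0
        self.interface_nodes = 0

    def add_storage_pod(self):
        """Grow capacity without touching the interface tier."""
        self.storage_pods += 1

    def add_interface_node(self):
        """Grow client throughput without adding any disks."""
        self.interface_nodes += 1

    @property
    def capacity_tb(self):
        return self.storage_pods * self.DISKS_PER_POD * self.TB_PER_DISK

    @property
    def throughput_gbps(self):
        return self.interface_nodes * self.GBPS_PER_NODE

cluster = ScaleOutNAS()
for _ in range(3):
    cluster.add_storage_pod()    # capacity need grows...
cluster.add_interface_node()     # ...while throughput is scaled separately
print(cluster.capacity_tb, cluster.throughput_gbps)  # 360 4
```

Contrast this with the older clustered systems described above, where capacity and throughput were welded together: to get more of one you bought a whole new island with both.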
So the 'next big thing', in my opinion, is here today, and IBM is using the best of IBM research for its clients. The SONAS solution is designed from the ground up as a true blue NAS storage solution. Look for future SONAS blogs on GPFS, creating an ILM strategy and more.