IBM NAS Storage
In answer to your requests for IBM N series demos, Andrew Grimes will be delivering a demo on Thursday, March 11th. This introduction to IBM N series will be followed by a brief and informative demonstration of how N series delivers storage efficiency along with disaster recovery solutions. This is your opportunity to show N series features and ease of use to your customers and prospects, plus get some assistance in closing business this quarter. All attendees who fill out the post-event survey will be entered into a drawing for a free Apple iPod.
WHEN: Thursday, March 11th, 10-11:30am CST.
PRESENTED BY: Andrew Grimes
Click here to REGISTER TODAY!
The topics that will be discussed during this N series presentation are:
1. Simplifying Data Management
2. Storage Efficiency
3. Protecting mission-critical business applications (Oracle, Exchange, SQL, VMware & SAP) better than our competitors
4. Most importantly, see how we recover these applications in a matter of minutes!
IBM Storwize® V7000 Unified stores up to five times more unstructured data in the same space with integrated Real-time Compression
Richard Swain 060000VQ8G email@example.com 777 Visits
Today IBM announced that Real-time Compression now covers not only block data on the V7000 but also file data on the V7000 Unified. Compression first arrived on the V7000 last summer as part of the big “Smarter Storage” announcement. The optimization uses the same code and engine IBM purchased from a company named Storwize a few years ago.
IBM initially kept selling the standalone compression appliance that Storwize was first known for in the market. It uses LZ compression with RACE (Random Access Compression Engine) to provide optimized real-time compression without performance degradation, slowing data growth and reducing the amount of storage that has to be managed, powered and cooled.
The engine does not need to compress or decompress entire files to access a data block; it compresses and decompresses only the relevant data blocks “on the fly.” As data is written, the RACE engine compresses it into smaller chunks, and the process is 100% transparent to systems, storage and applications.
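The random-access idea can be illustrated with a small sketch: compress fixed-size chunks independently and keep an index of where each compressed chunk lands, so any one chunk can be inflated without touching the rest. This is only a toy illustration of the concept; the chunk size, the use of zlib and the index layout are my assumptions, not RACE internals.

```python
import zlib

CHUNK = 32 * 1024  # illustrative chunk size, not RACE's actual value

def compress_chunks(data: bytes):
    """Compress data chunk by chunk, recording where each compressed
    chunk starts so any single chunk can be read back independently."""
    blob, index = b"", []
    for i in range(0, len(data), CHUNK):
        c = zlib.compress(data[i:i + CHUNK])
        index.append((len(blob), len(c)))
        blob += c
    return blob, index

def read_chunk(blob: bytes, index, n: int) -> bytes:
    """Decompress only chunk n -- no need to inflate the whole stream."""
    off, length = index[n]
    return zlib.decompress(blob[off:off + length])

data = (b"A" * CHUNK) + (b"B" * CHUNK) + b"tail"
blob, index = compress_chunks(data)
assert read_chunk(blob, index, 1) == b"B" * CHUNK  # random access to chunk 1
assert len(blob) < len(data)                       # repetitive data shrinks
```

The point of the index is exactly the “on the fly” property described above: a read of one logical block costs one small decompression, not a whole-file unzip.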
The V7000 Unified can now deliver more effective capacity than any other mid-range platform. IBM quotes compression savings of up to 75% on some data types, and a system that previously maxed out at 2.8 PB raw (960 drives x 3 TB each) is now rated to handle up to 5 PB of data.
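The arithmetic behind effective capacity is simple: raw capacity divided by the fraction of space left after compression. A quick sketch (the helper name and the example savings fractions are mine, for illustration only):

```python
def effective_capacity(raw_pb: float, savings: float) -> float:
    """Effective capacity from raw capacity and the fraction of space
    compression saves (savings=0.5 means data shrinks by half)."""
    return raw_pb / (1.0 - savings)

# 960 drives x 3 TB each is about 2.88 PB raw, as quoted.
raw_pb = 960 * 3 / 1000
assert round(raw_pb, 2) == 2.88

# A savings fraction of roughly 0.44 (~1.8:1) turns 2.8 PB raw into
# the quoted 5 PB figure; a full 75% savings would imply 4:1.
assert round(effective_capacity(2.8, 0.44), 2) == 5.0
assert round(effective_capacity(2.8, 0.75), 1) == 11.2
```

As always, the savings you actually see depend on how compressible your data is.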
Each V7000 Unified running the 6.4 code base has the option of turning on a 45-day trial of the compression software. After setting the license to “45” you can add new compressed volumes on the system, and you can also compress data on virtualized storage arrays.
Compression has been part of NAS for a very long time; we have seen compression of everything from JPEGs to office documents. But the best part is that the end user never has to worry about which files need to be zipped or compressed: everything that comes through the V7000 Unified can be compressed inline before the data is written to disk.
A couple of other improvements IBM announced were the addition of an integrated LDAP server to the V7000 Unified, which now allows customers to use both local authentication and external authentication servers to control access to data, and the ability to upgrade a V7000 to a V7000 Unified in the field. If you currently own a V7000 but need to add file access to the system, IBM will sell you the two file modules and corresponding software to upgrade your system. Mind you, there is a list of requirements that must be met, so check with your local storage engineer for more information. And finally, we now have support for a four-way cluster on the V7000 Unified, which allows more disks to be provisioned and can compete with some of the other mid-range storage platforms in the market.
All together this makes a nice round of improvements that will make life easier for IBM customers. As the V7000 platform matures, it looks like IBM is putting its money where its mouth is and making storage smarter and more efficient. More to come on this platform, as I suspect we will see bigger things down the road.
Do you expect more out of your storage? IBM thinks you should and is putting its money where its mouth is. In the past it has gone under different names like STG University and Storage Symposium, but now IBM has revamped its premier storage conference. The big announcement came today with much fanfare, including a new website, some videos and a bunch of hype on Twitter. A three-part conference for executives, gear heads and business partners, it has something for everyone. But what will be different from years past? I think IBM looked at how other vendors use conferences to help pump up their customer base (VMworld, EMCwhatever) and decided to put some hype into the conference.
Think of this as a great place to go to network, learn and have a good time. The conference will be in Orlando, and there will be plenty of time to sit in classrooms and learn about the latest technologies, but there will also be sessions where IBM pulls in our top execs and analysts to tell you where IBM is going in the storage world.
The Executive Edge will feature speakers including Jeff Jonas, Aviad Offer and IT finance expert Calvin Braunstein. This track will take executives through new announcements, deep dives on technical platforms, one-on-one sessions with IBM execs and some great entertainment. This is a new feature of the conference, which in the past was geared more toward the technical teams.
Of course, space in the Executive Edge will be limited, so talk to your local storage sales person for a chance to be part of this special event. There will be time to bring in your team and have special sessions and round tables with the IBM engineers who can help you find your way through this path of crazy storage growth. And there is a golf course on site which I have heard is very nice; bring your clubs or rent them. I am sure there will be plenty of us out there, so find a partner and have a good time.
More importantly, IBM is making the effort to step up the event and put it on par with other IBM conferences like Pulse. The technical portion will have over 250 sessions on storage-related topics. You will also get roadmap information from the product teams as well as a chance to become a certified technician. One area that has been expanding is our hands-on labs, and this year we will have the biggest one yet: you will be able to come into the labs, see our storage systems and have a chance to 'test drive' them.
Early bird registration is open now and you can sign up today. The conference will be in sunny Orlando, Florida, at the Waldorf Astoria and Hilton Orlando at Bonnet Creek. The event starts on June 4th and runs to the 8th. You can follow the conference on Twitter @IBMEdge using the hashtag #ibmedge. For the conference website, go here.
I look forward to seeing you in June.
My father is a retired teacher but loves to work with his hands. I can remember, very early in my upbringing, him teaching me that it is good to measure twice and cut once. Whether it was building a deck or just a birdhouse, the point was that it takes more time to cut something wrong and then have to re-cut the board shorter, or even waste the old board and cut a whole new one.
When I was preparing this article I remembered learning that lesson the hard way, and how much effort really goes into that second cut. The storage industry's version of that second cut is partitions misaligned by the move from 512-byte sectors to new 4096-byte sectors. This has to be one of the bigger performance issues with virtualized systems and new storage.
Disk drives in the past used a fixed sector size of 512 bytes. That was fine on a 315 MB drive, because the number of 512-byte blocks was nowhere near as large as on a 3 TB drive in today's systems. Newer versions of Windows and Linux will transfer 4096-byte data blocks that match the native hard disk drive sector size, but during migrations even new systems can run into trouble.
There is also something called 512-byte sector emulation, in which each 4K sector on the hard disk is remapped to eight 512-byte logical sectors, and every read and write is addressed in those 512-byte units.
When an older OS is installed or migrated, it may or may not align the first block of an eight-block group with the beginning of a 4K sector, causing a misalignment of one block. As reads and writes are laid down on the disk, this offset between logical and physical sectors means the eight 512-byte blocks now straddle two 4K sectors.
This forces the disk to perform an additional read and/or write, touching two physical 4K sectors for a single logical operation. It has been documented that sector misalignment can cause a reduction in write performance of at least 30% on a 7200 RPM hard drive.
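The misalignment condition is easy to check numerically: a partition is 4K-aligned when its starting byte offset (start sector times 512 bytes) falls on a 4096-byte boundary, i.e. when its 512-byte start sector is a multiple of 8. A small sketch (the function names are mine):

```python
SECTOR_512 = 512
PHYS_4K = 4096

def is_aligned(start_sector_512: int) -> bool:
    """True when the partition's starting byte offset sits on a
    4096-byte boundary of the physical disk."""
    return (start_sector_512 * SECTOR_512) % PHYS_4K == 0

def phys_sectors_touched(start_sector_512: int, count_512: int) -> int:
    """Number of physical 4K sectors an I/O of count_512 logical
    512-byte sectors starting at start_sector_512 actually touches."""
    first = (start_sector_512 * SECTOR_512) // PHYS_4K
    last = ((start_sector_512 + count_512) * SECTOR_512 - 1) // PHYS_4K
    return last - first + 1

# Classic example: old Windows partitions started at sector 63.
assert not is_aligned(63)
assert is_aligned(2048)                   # modern default: 1 MiB boundary
assert phys_sectors_touched(64, 8) == 1   # aligned 4K write: one sector
assert phys_sectors_touched(63, 8) == 2   # misaligned: two sectors
```

The last two assertions are exactly the penalty described above: a misaligned 4K write turns one physical operation into two.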
This issue is only magnified when other file systems are layered on top of the misalignment. When using a hypervisor like VMware or Hyper-V, the virtual disk image can itself be misaligned, causing even further performance degradation.
There are hundreds of articles and blogs on how to check your disk alignment; a simple Google search for “disk sector alignment” will show you this has been a very popular topic. Different applications have different ways of checking, and possibly realigning, the sectors.
One application that can help you identify and fix these issues is the Paragon Alignment Tool. It is easy to use and will automatically determine whether a drive's partitions are misaligned; if they are, the utility properly realigns the existing partitions, including boot partitions, to 4K sector boundaries.
I came across this tool while looking for something to help N series customers with misalignment issues in virtual systems. One of its biggest advantages is that it can align partitions while the OS is running and does not require snapshots to be removed. It can also align multiple VMDKs within a single virtual machine.
For more information on this tool and on alignment, check out the Paragon Software Group website.
In the end, your alignment will affect how much disk space you have, how much you can dedupe and the overall performance of your storage system. It pays to check this before you start having issues, and if you are already seeing problems, I hope this can help.
For the last six years IBM has been selling the N series gateway, and it has been a great tool for adding file-based protocols to traditional block storage. A gateway takes LUNs from SAN storage and overlays its own operating system. One of the 'gotchas' with the gateway is that the storage has to be net new, meaning it cannot take an existing LUN that already holds data and present it to another device.
Top 10 Reasons clients choose to go with IBM N series
Some years ago I put together a list of reasons why people choose to buy from IBM rather than purchase directly from NetApp. IBM has an OEM agreement with NetApp and rebrands the FAS and V-Series as its N series product line. They are both made at the same plant, and the only difference between them is the front bezel; you can even take a NetApp bezel off, stick it on an N series box, and it fits exactly.
The software is exactly the same; all we change are the logos and readme files. The entire functionality of the product is identical, and IBM does not add or take away any of the features built into the systems. The only difference is that it takes IBM about 90 days after NetApp releases a product to put it online and change the necessary documents.
Support for N series is done both at IBM and NetApp. As with our other OEM partnerships, NetApp stands behind IBM as the developer while IBM handles the issues. Customers still call the same 1-800-IBM-SERV for support and speak to trained engineers who have been working on N series equipment for 6+ years now. IBM actually has lower turnover than NetApp in its support division and has won awards for providing top-notch support. The call-home features most people are used to still go to NetApp via IBM servers.
10. The IBM customer engineer (CE) that is working with you today will be the same person who helps you with the IBM N series system.
9. The IBM GBS team can provide consultation, installation and even administration of your environment.
8. IBM is able to provide financing for clients.
7. When you purchase your N series system from IBM, you can bundle it with servers, switches, other storage and software. This gives you one bill, one place to go to if you need anything and one support number to call.
6. IBM has two other support offerings to help our clients. Our Supportline offering allows customers to call in and ask installation or configuration questions. We also have an Enhanced Technical Support (ETS) team that will assign a personal engineer who knows everything about your environment and will provide you with everything you need: health checks to be sure the system is running optimally, updates on the latest technology and a single point of contact in case you need to speak to someone immediately.
5. IBM N series warranty support is done by IBM technicians and engineers at Level 1 and Level 2. If your issue cannot be resolved by our Level 2 team, they have a hotline into the NetApp Top Enterprise Account team. This is a team only a few very large NetApp accounts can afford, and we provide this support to ALL IBM N series accounts, no matter how large or small.
4. Our support teams for different platforms (X series, Power, TSM, DS, XIV, etc.) all interact with one another, and when tough issues come up we are able to scale to the size of the issue. We can bring in experts who know the SAN, storage, servers and software, all under one umbrella. For those tough cases we assign a coordinator so the client does not have to chase all of these resources themselves; this person reaches out to all the teams, assigns duties and coordinates calls with you, the customer.
3. All IBM N series hardware and software is reviewed by an open source committee that validates there are no license violations, copyright infringements or patent infringements.
2. All IBM N series hardware and software is tested in our Tucson facility for interoperability. We have a team of distinguished engineers who support not only N series but other hardware and software platforms within the IBM portfolio.
1. All IBM N series equipment comes with a standard three-year warranty for both hardware and software. This warranty can be extended beyond the three years, as IBM supports equipment well beyond the normal 3-5 year life of a system.
When it gets down to it, customers buy because they are happy. Since the systems are exactly the same, it comes down to what makes them happy. For some, the NetApp offering makes them happy because they like their sales engineer; others like IBM because they have been doing business with us for over 30 years.
For more information about IBM N series, check out our landing page on http://www-03.ibm.com/systems/storage/network/
Now available is the IBM System Storage N series with VMware vSphere
Redbooks are a great way to learn a new technology or a reference for configuration. I have used them for years, not just for storage but for X series servers and for software like TSM. The people who write these books spend a great deal of time putting them together, and I believe most of them are written by volunteers.
This is the third edition of this Redbook and if you have read this before here are some of the changes:
- Latest N series model and feature information
- Updates to reflect VMware vSphere 4.1 environments
- New information on Virtual Storage Console 2.x
This book on N series and VMware opens with an introduction to both the N series systems and VMware vSphere. There are sections on installing the systems, deploying LUNs and recovery. After going through this Redbook, you will have a better understanding of a complete and protected VMware system. If you need help sizing your hardware, there is a section for you; if you are looking to test running VMs over NFS, it's in there too!
One of the biggest issues with virtual systems is making sure you have proper alignment between the guest file system blocks and the storage array. Misalignment can degrade most random reads/writes by a factor of two, since two storage blocks are required for one request. To avoid this costly mistake, or to correct VMs you have already set up, a section of the book called 'Partition alignment' walks you through the entire process of setting alignment correctly or fixing older systems.
Another area I will point out is the use of deduplication, compression and cloning to drive storage efficiency higher. These software features allow customers to store more systems on the storage array than traditional provisioning would. The book also covers how to use snapshots for cloning, mirrors for Site Recovery Manager, and long-term retention (aka SnapVault). At the end of the book are examples of scripts one might use for snapshots in hot backup mode.
Whether you are a seasoned veteran or a newbie to the VMware scene, this is a great guide that will help you set up your vSphere environment from start to finish. The information is there; use the search feature or sit down on a Friday with a highlighter, whichever fits your style, and learn a little about using an N series system with VMware.
Here is the link to this Redbook:
I just read the blogs from Chris Mellor of The Register and Tom Trainer of Network Computing, and thought: how insightful these two outsiders are about the inner workings of IBM.
First off, yes, IBM is no longer selling the DCS9900, a rebranded DDN OEM system in the very large IBM storage portfolio. There is no question that this product is no longer available after October 15.
Second, the DCS3700 is already part of our portfolio and is an OEM box from NetApp/Engenio/LSI. Its density is the same as the DCS9900's, so it makes sense to use the DCS3700 as the DCS9900's replacement.
Third, Tom's characterization of SONAS as monolithic NAS storage is very skewed. SONAS is very flexible in that we can scale storage and throughput independently, without one affecting the other; with most “scale out” systems you have to scale both in order to keep up with demand. SONAS uses some of the best technology on the market and delivers a huge amount of throughput.
His statement that IBM is dropping DDN from SONAS is untrue and goes to show how little research Tom put into writing this blog. I am sure Tom means to write a non-biased blog for Network Computing, but maybe those days at HDS still influence his ability to look at an announcement letter and extrapolate about other products.
Finally, if HDS thought BlueArc was so great, why didn't they buy the company back when they could have gotten it for a better deal? Has the product changed THAT much since 2006? I wish HDS only the best in handling the transition and getting that product under the HDS umbrella.
If you do your homework and base your assumptions on facts instead of conjecture, you will find SONAS is a solid platform in the enterprise NAS market. SONAS has proven it can be the market leader with a low cost to performance ratio and will only get better as time goes on.
Labor Day has come and gone, along with most of the holidays between now and Thanksgiving. What remains is the hope that your favorite football team (both American football and what we call soccer) has a great weekend match and you get to celebrate with the beverage of your choice.
During your work week, which can and sometimes does include weekends, all you hear is that there is no more money to do the things you must do to keep the business running. If you have kept up by squeezing more out of your systems with virtualization, that's great, but your network is now overtaxed. The staff who used to take care of certain day-to-day aspects of running your data center have been let go, and their jobs have been 'given' to you with no thought of compensating you for the extra tasks.
The Earth is warming, the weather is out of control and the price of gas is so high that you decide to bike to work to help save the planet. You spend more time on the road commuting and look like you need a shower when you get to work after dodging traffic all morning. Your coffee costs more now because the coffee house wants to use Fair Trade beans from farmers in a country you have never visited. And your dog is on anti-depressant meds because you are not home as much and he can't go out in the yard because of the killer bees migrating north from Mexico.
Our lives seem to be getting more complicated, and it's nice when we find things that not only help us but are easy to use. When you come across such items, they make such an impression that you want to tell others about your good fortune. I came across a solution that was very easy to use, and whose value was so great that at first I didn't believe the whole story.
About a year ago, I was asked to help out on the Storwize/Real-time Compression (RTC) team as it transitioned into the IBM portfolio. I met with the engineers and sales people, and all had wonderful things to say about the technology. I listened, but was hesitant to drink all of the Kool-Aid they were pouring.
A year later I am very much a believer in the RTC technology and think it really could be a game changer in the market. If you keep up with IDC, Gartner and the other analysts, they all point to compression of data as one of the larger levers for handling future growth. A lot of vendors claim they can compress data, but it's not all done the same way.
One of the things that stood out from day one is the idea of using LZ compression in real time instead of deduplication. Coming from an N series (NetApp) background, I understood how deduplication works and where it is useful. But compression is a different ball game: now we are able to shrink data even when it is not an exact duplicate of something stored before. Given that deduplication is sensitive to block size and offsets, this is exactly what is needed in the market.
The next question I always get, and one I had myself, was: “That's great, you can compress data with the best of them, but what's the overhead?” I waited a long time to see the performance numbers and found an astonishing outcome: the RTC appliance actually improved the performance of the overall solution. It helps by adding cache and processing power to the data path, but it also improves performance simply because the system has less data to process.
For example, if a system has to save 100 GB of data with no compression, then all of that data has to be laid out on disk: the spindles, cache, CPUs and I/O ports all have to work to store the full 100 GB. But if we get 2:1 or 3:1 compression ratios, all of those components work less; instead of saving 100 GB of data they are saving 50 GB or 33 GB. This lets the system process more data and leaves cycles to respond more quickly to I/O requests (i.e., lower latency).
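A few lines of arithmetic make the point concrete (the function name is mine, purely for illustration):

```python
def gb_to_disk(logical_gb: float, ratio: float) -> float:
    """Physical data the disks, cache and I/O ports must handle for a
    given logical write size and compression ratio (2.0 means 2:1)."""
    return logical_gb / ratio

assert gb_to_disk(100, 1.0) == 100.0   # uncompressed: full 100 GB hits disk
assert gb_to_disk(100, 2.0) == 50.0    # 2:1 halves the back-end work
assert round(gb_to_disk(100, 3.0)) == 33  # 3:1 leaves about a third
```

That reduction in physical bytes per logical write is exactly where the latency benefit comes from.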
So the final thing is always the question of how hard this is to install. Is there a long wait, or do you need five IBM technicians to put it in? All I have to say is: it's easy. So easy that there is a good YouTube video covering the entire process, from unpacking to racking to compressing data. I think the video speaks for itself:
Last week at the IBM Technical Conference I was able to spend some time with a couple of friends discussing technology. It is always interesting to hear their take on where the storage market is going and what lies ahead. As my NetApp pal and I chatted about the messaging around unified architecture, we both noted that what is unified from one perspective is disjointed from another.
IBM and NetApp have been using the term 'unified' for their NAS/SAN devices for about five years now. The idea is to share a common code base on the same hardware to increase the functionality and usability of the storage. Other vendors have gone similar routes using multiple code bases and/or hardware, but I see that as a NAS gateway in front of a SAN storage system.
This has been very successful in data centers both large and small, but the way we manage storage is changing. Virtualization is changing how, and even where, our data may be stored. 'Cloud' is something of a marketing term; I like 'storage utility' better. Utility companies for electricity, water, sewer and even cable provide a product to their consumers, and storage utility vendors could do the same.
Most people are not concerned about the process companies use to make water drinkable, or how electricity is generated, as long as it is safe, reliable and easy to consume. Storage as a utility is no different: it is only when the storage goes offline or is hacked by outsiders that consumers become concerned. There are laws that govern utilities, and the FTC has put some privacy rules together to help consumers, but I believe we can take it a little further (a blog for another time).
As our data moves from traditional spinning drives in our data centers to a storage utility, we will need some type of bridge to ease the pain of transition. The main reason people do not adopt new technology is that the transition is often too painful and the benefit is less than the cost of moving. Whether it is a software package that helps move data or a hardware device, it will have to give access to both file-based and object-based data. This will allow users to read their files as needed, no matter their connectivity or location. It could also help drive efficiency up by allowing data to move from file-based (higher-cost) to object-based (lower-cost) environments.
Today there are some vendors who have early versions of this type of unified solution. They are bridging the gap between what we have today in private data centers and the future of public utility storage. This is very early in the transition but with this type of technology, we will be able to adapt and provide a better way of storing data. Will it still be called a unified solution? Only the marketing people can tell us that.
Every year IBM puts on a conference for all of our clients, business partners and strategic partners. This conference has both Storage and X series sessions, along with keynote speakers from IBM's top management. People come from all over the world looking for the 'how to' answers and what's to come with the product lines. There is also a solution center that houses all of the products along with our sponsors; this year our top platinum sponsors are Cisco, Intel and NetApp, and other sponsors include Brocade, Emulex, Fusion-io, VMware, Red Hat and SUSE. I plan to be working in the solution center at the SONAS booth, talking to clients about the benefits of SONAS and how it fits into their environments. If you want to stop by, here are the hours I will be there:
Monday, July 18th
Solution Center Open 5:30 PM – 7:30 PM (w/ Networking Drinks)
Tuesday, July 19th
Sponsor / Exhibitor Only Lunch 11:15 AM – 11:45 AM
Solution Center Open 11:45 AM – 1:00 PM (Coffee & Dessert served in the Solution Center)
Solution Center Open 5:30 PM – 7:30 PM (w/ Networking Drinks)
Wednesday, July 20th
Sponsor / Exhibitor Only Lunch 11:15 AM – 11:45 AM
Solution Center Open 11:45 AM – 1:00 PM (Coffee & Dessert served in the Solution Center)
I will also be presenting a few sessions on NAS technology at the conference. Most of my sessions will look at what IBM is doing with SONAS, N series and Real-time Compression. I have a NAS 101 class that I really love doing, because so many people have misconceptions about what NAS is today. In my N series update session we will talk about the latest release of the N6270 and the EXP3500, as well as a peek at the R23 release coming in a few weeks. The other two sessions I am doing are a little off the topic of NAS, covering social media and using www.ibm.com for support. Tony Pearson, John Sing and Ian Wright will join me on a panel to discuss the roles we play in social media and what each of us thinks of its future. The support session is something a client suggested to me out of frustration with finding documents on our support pages. Here is a list of the sessions and times I will be presenting:
7/18 - 1:00 sSN14 Storage Networking (NAS - SAN) NAS 101: An Introduction to Network Attached Storage
7/19 - 10:30 sSN15 Storage Networking (NAS - SAN) NAS @ IBM
7/19 - 1:00 sSN18 Storage Networking (NAS - SAN) IBM N series: What's New?
7/20 - 1:00 sGE10 General Tips and Tricks on Searching for Support Answers on ibm.com
7/20 - 5:30 sGE61 General Using Social Media in System Storage
7/21 - 10:30 sSN18 Storage Networking (NAS - SAN) IBM N series: What's New?
7/21 - 2:30 sSN15 Storage Networking (NAS - SAN) NAS @ IBM
If you are at the conference, feel free to come to any of my sessions; I would love to hear from you about the IBMNAS blog or any of my social media outlets. If you want to follow what is going on via Twitter, we are using the hashtag #ibmtechu all week.
I was just thinking the other day that I really needed to write an article for my blog about the upcoming releases. When I opened the page, it said I had not written anything since May of this year. Time really flies when you are having fun.
IBM just released a new XIV system dubbed the Gen 3. Generation 1, of course, was built by the XIV company before IBM purchased them, and Gen 2 came shortly thereafter. As you would expect, the system has to keep up with customer demands and technology refreshes, but something very unique caught my eye: the performance of this system will be head and shoulders above the competition.
The Nehalem micro-architecture now makes up the heart of the processing power within the grid, with tons more cache to boot. There is also a change in the interconnect from Ethernet to InfiniBand. I can't wait to see the new SPC-2 numbers when they are published.
I suspect the introduction of more cache (via SSD) and the switch to near-line SAS drives are only going to increase performance from a Gen 2 to a Gen 3 system. The self-tuning, self-healing, tierless storage is still at the heart of the system and still redefines how storage is done today.
There are plenty of blogs and articles on the specifics of the release but here is the IBM announcement page http://www-03.ibm.com/systems/storage/news/announcement/big-data-20110712.html
May 9th has been a target on my calendar for some time now. Inside IBM, we have been waiting for this day so we could talk about the new things being released across the storage platform. It almost feels like Christmas morning with a bunch of new presents under the tree; each gift holds something that is either really cool or very useful. The only difference is that your Aunt Matilda and her little dog are not coming over for brunch.
Under the IBM tree today is a slew of presents for almost the entire storage platform. I will concentrate on just the IBM NAS ones but if you are interested in knowing what is going on elsewhere, you can find more information at the main website.
SONAS must have been a good boy, because there are plenty of gifts for him under the tree this morning, plus a few little things in his stocking. Here is what Santa brought:
This SONAS release is labeled R1.2 and can be obtained by contacting the technical advisor assigned to you.
Santa was also at the N series house and dropped off a few gifts: a new N6270 to replace the N6070. This new system is in line with the N6200 series, with larger amounts of RAM and more processors. Just like the smaller N6240, it has an expansion controller where customers can add PCI cards such as HBAs, 10 GbE or even FCoE adapters. A new disk shelf was also released, which uses the smaller 2.5-inch drives for improved back-end performance.
And over at the Real Time Compression house they got new support for EMC Celerra.
Overall it is a very busy time of year for IBM (and Santa), as these were just a fraction of the storage announcements today. Today is also the IBM Storage Executive Summit in New York City. My friend and fellow blogger Tony Pearson is covering this great event and will be updating his blog and Twitter feed. If you were not able to make it to NYC, feel free to tweet him your questions @az990tony. You can also send questions to our IBM Storage feed at @ibmstorage.
I had the pleasure of presenting at the IBM Technical Conference (aka STG-U) this past week. I was asked to speak about NAS technology basics and how the world is moving to more and more NAS platforms. Typically I present on some type of product: SONAS, N series and the like. This was very different, as I got the chance to go deeper into the technology without talking too much about products. The session name I used was NAS 101: An Introduction to NAS Technology. The idea was to help educate our technical teams about the history of NAS, how NAS works, some pitfalls, and then NAS at IBM.
There is so much surrounding NAS that boiling it all down to a 1 hour 15 minute presentation is pretty difficult. The other challenge is keeping the information pitched at the level of everyone in the session; I had everyone from very skilled storage engineers to people just getting into the business. I hope what I presented was relevant at all levels.
I wanted to post my slide deck here, so if you have a need, or want me to come and help teach what NAS is all about, feel free to contact me.
Just checking in from the IBM booth as Oracle Collaborate 2011 kicks off. There is lots of interest in the Watson kiosk, and people are milling around each of the stands. If you are here at the show, stop by and ask for the storage guys.