Talking Networking at the System X & PureSystems Technical University
While I’ve been trying to enjoy the nice summer weather as much as anyone (even with teenagers, Disney World is simply awesome), the wheels of technology keep turning through summer vacation. For example, IBM recently hosted the System X and PureSystems Technical University in San Francisco, California. With over 27 major sponsors and exhibitors, ranging from Intel to QLogic, this was an event worth attending. As usual, my interest lies in all things related to data center networking, so I was pleased to see more content on IBM’s SAN Volume Controller (SVC), presented by one of our business partners, Brocade (IBM invented SVC some time ago, but Brocade was only recently qualified to support stretch clusters as part of this solution). Regular readers of my blog will recall that Brocade is among the endorsers of the Open Datacenter Interoperable Network (ODIN), and that the SVC Stretch Cluster solution came up previously when I presented at the IBM Storage Edge conference in June. I’d like to mention a few additional features of storage networking with SVC that didn’t make it into my earlier blog, and try to segue from Disney World to World Wide Port Names (let me know how you think this works out).
If you missed this event and would like to follow along, the presentation from Brocade can be accessed at the IBM Tech University site; once you’ve created a login, just search for presentation evr51. You can also catch up on this solution through the IBM storage road show making its way around the country over the next month or so.
Multi-site storage deployments are useful for many applications, including improved physical security, disaster avoidance/recovery, and increased uptime through moving workloads between compute centers. The IBM SVC Stretch Cluster solution aligns your storage access needs with virtual machine mobility across extended distances. The practical distance depends on your latency requirements; since we can’t get around the speed of light (yet), IBM recommends 100 to 150 km or so for typical applications, although the solution is qualified out to 300 km. SVC Stretch Clusters provide read/write access to storage volumes across multiple sites, and work in concert with Tivoli management products to ensure synchronous data replication. SVC also supports SAN routing with industry-standard FCIP links for intercluster communication and volume mirroring within split cluster groups. The underlying IP infrastructure complies with ODIN best practices, and includes Brocade offerings such as the MLXe switch, which provides line-rate 1, 10, and 100 Gbit/s Layer 2 connectivity based on MPLS and VPLS/VLL.
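To see why distance matters so much for synchronous replication, it helps to work the propagation delay out. Here’s a minimal sketch, assuming light travels roughly 200,000 km/s in single-mode fiber (a common rule of thumb); real links add switch, transponder, and protocol latency on top of this floor.

```python
# Rough fiber propagation delay for stretch-cluster planning.
# Assumes ~200,000 km/s signal velocity in single-mode fiber
# (a rule of thumb; actual equipment adds further latency).

FIBER_KM_PER_MS = 200.0  # ~200 km of fiber traversed per millisecond, one way

def round_trip_ms(distance_km: float) -> float:
    """A synchronous write must wait at least one round trip."""
    return 2 * distance_km / FIBER_KM_PER_MS

for d in (100, 150, 300):
    print(f"{d:>3} km: ~{round_trip_ms(d):.1f} ms round-trip propagation delay")
```

At the recommended 100 to 150 km, every synchronous write pays at least 1.0 to 1.5 ms of pure propagation delay; at the 300 km qualification limit, that floor doubles to about 3 ms, which is why the shorter distances are preferred for latency-sensitive workloads.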
Digging a bit further into the technology, Brocade supports the IBM 16 Gbit/s Fibre Channel adapters used in System X solutions; both single- and dual-port options are available, delivering over 1,000,000 IOPS per adapter. These adapters support features including SAO (application-level quality of service assignment), target rate limiting, boot over SAN, boot LUN discovery, NPIV, and switched N_Ports. The IBM Flex systems include embedded offerings such as a 24-port or 48-port scalable SAN switch, also running 16 Gbit/s links with over 500,000 IOPS per port. The SAN switches used with SVC provide additional buffer credits to support long-distance connectivity (half a dozen ports running up to 250 km without performance droop, and negligible droop out to 300 km and beyond). To reduce the number of fibers required between sites and save cost when connecting two remote locations, you can consolidate up to four lower-data-rate links into a single inter-switch link (ISL) at 16 Gbit/s, then logically combine up to eight ISLs into a single high-performance frame-based trunk.
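The buffer-credit and trunking numbers above can be sanity-checked with back-of-the-envelope math. This sketch uses the common rule of thumb of roughly 0.5 buffer credits per km per Gbit/s of line rate to keep a long link full; treat these as planning estimates, not vendor-qualified figures.

```python
import math

# Planning estimates for long-distance ISLs. The 0.5 credits/km/Gbps
# factor is a widely used rule of thumb, not a qualified specification.

def buffer_credits(distance_km: float, rate_gbps: float) -> int:
    """Approximate credits needed to keep a link streaming at full rate."""
    return math.ceil(0.5 * distance_km * rate_gbps)

def trunk_bandwidth_gbps(isl_rate_gbps: float, isls_per_trunk: int) -> float:
    """Aggregate rate when up to eight ISLs form one frame-based trunk."""
    return isl_rate_gbps * isls_per_trunk

print(buffer_credits(250, 16))      # credits to keep a 16G link full at 250 km
print(trunk_bandwidth_gbps(16, 8))  # aggregate Gbit/s of an eight-ISL trunk
```

A 16 Gbit/s link at 250 km wants on the order of a couple thousand credits to avoid droop, which is why the extended-distance buffer pools on these switches matter, and an eight-ISL trunk of 16 Gbit/s links gives you 128 Gbit/s of logical bandwidth between sites.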
When using the Brocade Fibre Channel adapters in a fabric, it’s possible to eliminate fabric reconfiguration when adding or replacing servers. You can also reduce the need to modify zones and LUN masking, since you can pre-provision fabric ports with virtual world wide port names (WWPNs) and set up your boot LUNs, fabric zones, and LUN masks in advance. It’s easy to migrate virtual WWPNs within a switch and map them to physical devices to help with asset management. Further, you can use diagnostic port features to non-intrusively verify that your ports, transceivers, and cables are in good working order, reducing fabric deployment and diagnostic times from days to a few hours or less (depending on the size of your fabric).
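The pre-provisioning idea above can be illustrated with a small sketch: zoning is bound to a virtual WWPN assigned to a fabric slot, so swapping the physical server later only changes the physical mapping, not the zones or LUN masks. The WWPNs, names, and helper here are all made up for illustration; a real fabric would do this through the switch’s own zoning and virtualization tools.

```python
# Illustrative sketch of virtual-WWPN pre-provisioning (all names and
# WWPN values below are hypothetical, not actual fabric commands).

from dataclasses import dataclass, field

@dataclass
class Zone:
    name: str
    members: list = field(default_factory=list)  # WWPNs in this zone

def preprovision(server_vwwpn: str, storage_wwpn: str, slot: int) -> Zone:
    """Zone a virtual WWPN against storage before the server arrives.
    Because zoning keys on the virtual WWPN, replacing the physical
    server later only re-maps hardware; zones and LUN masks are reused."""
    return Zone(name=f"boot_zone_slot{slot}",
                members=[server_vwwpn, storage_wwpn])

zone = preprovision("50:00:00:00:00:00:00:01",  # virtual WWPN for slot 3
                    "50:00:00:00:00:00:10:01",  # storage target WWPN
                    slot=3)
print(zone.name, zone.members)
```

The design point is that identity lives in the fabric (the virtual WWPN), not in the adapter hardware, which is what lets you skip re-zoning on a server swap.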
If you’d prefer to connect multiple sites using wavelength multiplexing (such as the offerings from ODIN endorsers Adva, Ciena, or Huawei), you can run ISLs directly over a WDM network. I’ll have more to say about WDM solutions qualified by IBM in a future blog. For now, here’s a quick tip for configuring your Brocade switch fabric: if you want to run line-rate 10 Gbit/s from the Brocade SAN switch directly over WDM, the first 8 ports on the FC16-32 or FC16-48 blades can be configured to operate at this data rate, and you can save a slot in the DCX with this configuration. And remember that you can always logically partition the switches to isolate different traffic types, so you can connect storage resources in a PureFlex to a larger existing SAN that might be running your System z FICON traffic, and keep the two applications isolated from each other.
Your SVC Stretch Cluster solution complements the integrated compute power of PureFlex, and both can co-exist in your data center. All PureFlex resources are managed from one point with Flex System Manager (FSM), and the use of open industry-standard protocols means you’ll be getting the lowest possible hardware cost. Of course, you knew all that if you made it to PureSystems Technical University for your summer vacation, so you can get started saving money and improving storage performance right away. If you missed it, don’t worry…IBM will be offering more technical university events in the coming months, spread around the world, for not only PureSystems but many other brands as well. If you can attend, drop me a line & let me know how you liked it; I’ll keep everyone posted on the feedback through my blog & Twitter feed.