IBM System x and Network Optimization
Blog by Dave Willoughby, IBM Senior Technical Staff Member,
System x-based Workload-optimized Solutions
We are finding great ways to elevate the role of Ethernet on System x-based solutions. You may have heard about our recent additions to the IBM PureSystems family called the IBM PureData System. The PureData System is optimized for delivering data services to support transactional or analytics applications. With the System x-based PureData System for Transactions, we created an enterprise-level, tier-1 IBM DB2 pureScale database solution that uses 10 Gb Ethernet at its core.
In case you haven’t heard of DB2 pureScale, it’s a DB2 version designed for organizations that run OLTP applications on distributed systems. Essentially, it offers built-in clustering technology for high availability and exceptional scalability, which is transparent to applications. With pureScale, DB2 spreads the database across several "member" servers, and database consistency is maintained through member coordination with a "coupling facility" located on a central server. The role of the coupling facility is to ensure each member server has the latest copy of data. For DB2 pureScale to scale effectively, the communication protocol between members and the coupling facility must have very low latency, and the network between them must handle a very large number of relatively small transactions and a very large number of IOPS.
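To make the member/coupling-facility pattern concrete, here is a minimal, purely illustrative sketch (not IBM code, and every class and method name here is hypothetical). It models the coordination idea described above: members keep local page caches, and a central coupling facility tracks the authoritative version of each page so stale copies get refreshed.

```python
class CouplingFacility:
    """Toy central registry of the latest committed version of each page."""
    def __init__(self):
        self.pages = {}  # page_id -> (version, data)

    def register_write(self, page_id, data):
        # Bump the page's version on every committed write.
        version = self.pages.get(page_id, (0, None))[0] + 1
        self.pages[page_id] = (version, data)
        return version

    def latest(self, page_id):
        return self.pages.get(page_id, (0, None))


class Member:
    """Toy database member with a local cache validated against the CF."""
    def __init__(self, cf):
        self.cf = cf
        self.cache = {}  # page_id -> (version, data)

    def write(self, page_id, data):
        version = self.cf.register_write(page_id, data)
        self.cache[page_id] = (version, data)

    def read(self, page_id):
        latest_version, latest_data = self.cf.latest(page_id)
        cached = self.cache.get(page_id)
        if cached is None or cached[0] < latest_version:
            # Cache miss or stale copy: fetch the latest page from the CF.
            self.cache[page_id] = (latest_version, latest_data)
        return self.cache[page_id][1]


cf = CouplingFacility()
m1, m2 = Member(cf), Member(cf)
m1.write("page-7", "balance=100")
print(m2.read("page-7"))  # m2 sees m1's committed write via the CF
m2.write("page-7", "balance=80")
print(m1.read("page-7"))  # m1's stale cached copy is refreshed
```

Because every read may involve a round trip to the coupling facility, you can see why the real system's member-to-CF messaging must be extremely low latency, which is exactly where RDMA comes in.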
Not too long ago, I led a workgroup that studied both quad data rate (QDR) InfiniBand (40 Gb) and 10 Gb Ethernet. We found that for DB2 pureScale’s large IOPS requirement, 10 Gb Ethernet performance was on par with that of QDR InfiniBand throughout the range of DB2 client workloads that we tested.
Running DB2 pureScale’s high-IOPS RDMA traffic over 10 Gb Ethernet not only met performance requirements, it also yielded several side benefits in the PureData System for Transactions design. The same 10 Gb Ethernet fabric could also accommodate the normal TCP-based Ethernet traffic the solution required — on the same set of adapters and switches. This removes the need for a separate Ethernet network for TCP-based traffic. And because the more common Ethernet protocol is used, clients don’t need InfiniBand skills and infrastructure. To me, the implication of these results is very exciting.
So to sum it up, fabric convergence is an important strategic direction as Ethernet is now capable of carrying high-speed RDMA messaging protocols as well as Fibre Channel storage protocols. In our studies, we found that DB2 pureScale with 10 Gb Ethernet meets performance requirements, provides consolidation of multiple traffic types onto a single set of equipment, and allows clients to use the Ethernet technology that they have invested in and that they are already familiar with. This means clients no longer have to acquire and maintain separate fabrics dedicated to individual traffic classes, but rather gain the operational efficiencies of multiple traffic classes on one fabric type.
With the announcement of the IBM PureData System, System x has demonstrated that enterprise-class database functionality can be achieved with only 10 Gb Ethernet equipment. And while there are many advantages for clients in the broad availability and acceptance of Ethernet, IBM also benefits by having multiple vendors to work with in creating Ethernet-based designs.
Connect with Dave on LinkedIn