By Dave Ridley, IBM UK & Ireland
Today, we're announcing what I consider the most exciting addition to the range of Flex System compute nodes - the IBM Flex System x222 Compute Node. This system is ideal for clients looking to virtualize their workloads and maximize resource density. By installing 14 of the x222 systems into an IBM Flex System Enterprise Chassis, you get a staggering 28 individual servers in just 10U of space!
Looking at the front of the node you can see that there are two individual servers, each with power, USB and KVM access on the front, as well as the two light path diagnostics panels on the side. It's quite an incredibly dense design. For storage, each server ships standard with one 2.5" simple-swap drive bay supporting SATA and SSD drives, or that bay can be converted with a tray to hold two 1.8" solid-state drives. There is also the option of using a USB memory key inside each server for operating systems like VMware ESXi.
Before I get into the technical facts of this node, let's look at the unique design first. It's designed mechanically somewhat like an alligator head - the two individual servers comprising the x222 are joined on pegs at one end, and the upper node pivots down and closes like a clamshell onto the lower node, where it is locked into place.
Once you open it up, you'll see some really clever design work from the mechanical engineering team at IBM Research Triangle Park. For example, the memory modules of the two servers interleave with each other, maximizing the use of space within the node and ensuring even cooling. The memory modules rest on foam pads when the node is closed, providing both vibration protection and optimum cooling of the DIMMs. A total of 384 GB of memory can be installed in each server.
The processors are from the Intel Xeon Processor E5-2400 family, with up to eight cores each. That gives a total of 448 cores in a single 10U chassis of x222 nodes! Each server (top and bottom) in the x222 has three memory channels with two DIMMs per channel for each processor, giving a total of 12 memory DIMMs per server. Supported memory is either RDIMMs or LRDIMMs.
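As a quick sanity check of the density claims above, the chassis totals follow directly from the per-node figures quoted in this post (14 nodes per chassis, 2 servers per node, 2 processors per server, up to 8 cores per processor, and 384 GB per server - which implies 32 GB modules filling all 12 DIMM slots). A minimal back-of-envelope sketch:

```python
# Back-of-envelope capacity math for a 10U chassis full of x222 nodes,
# using the per-node figures quoted in the article.

NODES_PER_CHASSIS = 14
SERVERS_PER_NODE = 2
CPUS_PER_SERVER = 2        # Intel Xeon E5-2400 family
CORES_PER_CPU = 8          # top-bin parts; lower-core SKUs also exist
DIMMS_PER_SERVER = 12      # 3 channels x 2 DIMMs per channel x 2 CPUs
MAX_GB_PER_SERVER = 384    # implies 32 GB LRDIMMs in every slot

servers = NODES_PER_CHASSIS * SERVERS_PER_NODE          # 28 servers
cores = servers * CPUS_PER_SERVER * CORES_PER_CPU       # 448 cores
memory_tb = servers * MAX_GB_PER_SERVER / 1024          # 10.5 TB

print(f"{servers} servers, {cores} cores, {memory_tb:.1f} TB RAM per chassis")
```

Running this prints `28 servers, 448 cores, 10.5 TB RAM per chassis`, matching the 28-server and 448-core figures given above.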
Each server has two 10 Gb Ethernet ports provided by an Embedded 10Gb Virtual Fabric Ethernet LOM controller based on the Emulex BE3 chipset. These ports support vNIC and can be upgraded to support iSCSI and FCoE if needed. One major thing to note is that using both ports on both servers requires two scalable Ethernet switches in the chassis, and each switch must be upgraded to enable 28 internal switch ports.
At the rear of the node is where the I/O adapter can be installed to provide additional fabric connectivity, and as this is two servers in a single node, the I/O adapter is of a "mid-mezz" shared design. The available adapters are InfiniBand or Fibre Channel. The FC5024D 4-port 16Gb FC Adapter provides two channels of 16 Gb FC from each server, one connected to the switch in Bay 3 and one connected to the switch in Bay 4. The IB6132D 2-port FDR InfiniBand Adapter provides one 56 Gbps FDR InfiniBand connection to each server.
There are some good diagrams that explain the I/O within the x222 - and indeed every aspect of this density-optimized server - and they can all be found in the IBM Redbooks Product Guide for the x222 Compute Node on the Redbooks site. Why not take a look?
Dave Ridley is the PureFlex and Flex System Technical Product Manager for IBM in the United Kingdom and Ireland. His role includes product transition planning, supporting marketing events, press briefings, management of the UK loan pool, running early ship programs, and supporting the local sales and technical teams. He is based in Horsham in the United Kingdom, and has been working for IBM since 1998. In addition, he has been involved with IBM x86 products for some 27 years.
Dave is the co-author of the 3rd edition of IBM PureFlex System and IBM Flex System Products and Technology from IBM Redbooks.