IBM NeXtScale System is a new offering from IBM, just announced this week. This new dense offering consists of a simple, energy-efficient, space-efficient enclosure that holds 12 servers in 6U of rack space. I introduced the new server in yesterday's blog post.
I was in Taipei for the past two weeks working with a team of subject matter experts from around the world (Barcelona, Dubai, Vancouver, and Denver to be precise) on an upcoming Redbook on the new system. We got to work with the development team (great people, BTW) here in Taipei and took a lot of photos of the system along the way.
So for you hardware junkies out there, here’s what the IBM NeXtScale System hardware looks like. Click the photos to see larger versions.
First, the NeXtScale n1200 Enclosure. This chassis is a 6U unit and holds up to 12 servers. The photo below shows the front of the chassis, where the servers are installed. You'll notice that the servers have all their ports at the front. As someone said here in Taipei, "The front is the new back!"
In this photo, all servers are 1U high and half the width of the chassis. Add-on trays, similar to those for iDataPlex, are in development. These trays add 1U of height and give you either extra disk space (7x 3.5-inch bays) or two extra PCIe slots for cards such as GPUs and coprocessors.
The back of the chassis is where the fans and power supplies are. The chassis has 10 fans and 6 power supplies, and all are standard.
The panel on the right is just a label with a screwdriver attached to it. The screwdriver is a T8 that is used to remove the drive bays from inside the compute nodes (you probably won’t ever need to use it, however).
The unit in the middle is the Fan and Power Controller (FPC). The photo below is a closeup – the device on the right is the FPC and the label on the left just covers a spacer. It looks like the AMM in BladeCenter, but this isn't a full management module. We're simplifying here – in the NeXtScale chassis, all the FPC does is manage power and cooling. The Ethernet port lets you view status and set parameters related to power (power policies and capping, for example) and cooling (acoustic mode, for example). Both a web interface and an IPMI command interface are provided.
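Because the command interface is standard IPMI, you should be able to query the FPC with a generic tool such as ipmitool. Here's a minimal sketch – the IP address and the USERID/PASSW0RD credentials are placeholders, and exactly which commands a given FPC firmware level answers is worth verifying against the documentation:

# Read the fan and power sensors over the LAN
ipmitool -I lanplus -H 192.168.0.100 -U USERID -P PASSW0RD sdr list

# If the firmware implements the DCMI power extensions, read the
# chassis power draw and the configured power cap
ipmitool -I lanplus -H 192.168.0.100 -U USERID -P PASSW0RD dcmi power reading
ipmitool -I lanplus -H 192.168.0.100 -U USERID -P PASSW0RD dcmi power get_limit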
Here’s the FPC removed from the chassis. The FPC has a USB memory key on which the event log and the power and cooling configuration are saved. In the unlikely event that the FPC fails and needs replacing, you can move this key to the replacement FPC and all the data comes across with it.
Here’s the compute node we just announced, the NeXtScale nx360 M4. There are three Ethernet ports on the front – two are for the operating system and one is dedicated to the IMM2 (Integrated Management Module II). Next to the Ethernet ports is a KVM port for local console access if you need it. The unit has two adapter slots: one standard PCIe slot and one mezzanine card slot. On the left are the power button and LEDs.
Here’s a closeup of the front. We put a 10Gb Ethernet card with ports for SFP+ optics or cables in the mezz slot. And in case you were wondering: yes, silver is the new black.
This is what you use to connect to the KVM connector – the KVM dongle. It provides video, serial, and two USB ports for a mouse and keyboard. You get one KVM dongle with the chassis. (BTW, this is the same dongle as used with Flex System.)
Inside the server, here’s the top view. The front is on the left. Click the image to see a larger version.
The server has two Intel Xeon E5-2600 v2 processors. These are the processors Intel just announced, codenamed “Ivy Bridge”. Each processor supports 4 DIMMs. Our research showed that roughly 90% of our iDataPlex customers used only 8 DIMM slots, so this was one of the ways we reduced the price of the system – by providing a no-frills server.
On the left are the spaces for the mezz card (top) and the PCIe card (bottom). Here’s a closeup of the PCIe riser card, which gives you a PCIe 3.0 x8 slot for a RAID card, network card, or HBA. The riser actually plugs into a PCIe 3.0 x24 slot – enough lanes for a future connection to an I/O tray.
Here’s a mezz card installed.
At the back is the space for the drives. You can have one of the following for internal drives:
1x 3.5-inch HDD
2x 2.5-inch HDD
4x 1.8-inch SSD
Here’s the 3.5-inch drive bay. The drive just slots in and mates with the drive connector.
Here’s what the 2x 2.5-inch drives look like. You pull out the little blue button on the side of the cage and then lift up the drive bays. On the other side, you can then pull a lever that pops out the drives. There are springs at the back that push the drives out (not strong enough to launch the drives, though – I tried!).
Tucked away near the DIMMs is an internal USB port. This is for a USB memory key with VMware ESXi installed on it.
Finally, at the back of the server to the left of the drive bays is an extra PCIe 3.0 x16 slot. This slot isn’t used in the current offering but will be with future I/O trays.
So there you have it. It’s a no-frills system, but you get the latest Xeon processors from Intel in a cost-effective, dense package. With 12-core processors, you can have up to a total of 2,016 cores per rack!
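That core count is straightforward arithmetic: 2 processors × 12 cores = 24 cores per server, × 12 servers = 288 cores per 6U chassis, and a standard 42U rack holds 7 of these chassis, so 288 × 7 = 2,016 cores.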
For more information on the IBM NeXtScale System and the nx360 M4 server and n1200 enclosure, see the IBM Redbooks Product Guide and keep an eye out for the IBM NeXtScale System Planning and Implementation Guide from IBM Redbooks too!
David Watts is an IBM Redbooks Project Leader. He writes books and papers on many areas related to IBM Flex System, IBM BladeCenter and IBM System x. Follow David on Twitter at @DavidAtRedbooks.