Why did I ask this question? Because the processor, memory, and I/O subsystems in the majority of Intel Xeon processor-based server systems are well balanced and rarely become performance bottlenecks. In many application environments (including transactional databases, business analytics, and virtualization), the major source of performance issues is storage I/O, because the speed of traditional storage systems still does not match the processing capabilities of the servers. This disparity can leave a powerful processor sitting idle while it waits for storage I/O requests to complete. The wasted processor time negatively impacts user productivity, extends the return on investment (ROI) time frame, and increases overall total cost of ownership (TCO).
With the virtualization trend in data centers, servers demand higher I/O bandwidth to match the capabilities of the Intel Xeon E5 and E7 families of multi-core processors and the increased amounts of memory, which allow a higher number of virtual machines (VMs) to be hosted on a single physical system. Higher I/O bandwidth, including storage I/O bandwidth, can help achieve better server utilization and a higher VM-per-server ratio.
But if storage I/O is so critical, what type of storage connectivity should I use?
In the recent past, converged networks were a popular discussion topic because of their ability to carry both LAN and SAN traffic over the same physical infrastructure. Depending on infrastructure requirements, this can help reduce the number of ports, adapters, and devices used, and therefore decrease overall TCO. However, there is no "one size fits all" approach, and selecting the most suitable storage connectivity can be a difficult task. Both dedicated SANs and converged networks have their own strengths and benefits, and the final choice depends on which requirements matter most in each particular deployment.
I think that in general, FC SANs can potentially provide better performance, availability, scalability, and security, while converged networks can potentially achieve lower TCO. If the storage workload is light to medium and TCO is the key decision factor, a converged network is an appropriate choice. If the storage workload is moderate to heavy and performance, availability, scalability, and security are the key decision factors, FC SAN is an appropriate choice.
Let's consider an IBM Flex System example that uses 16 Gb Fibre Channel connectivity. The IBM Flex System 16 Gb FC solution provides storage fabric speeds of up to 1.6 GBps per port per direction. Combined with reliable high-speed solid-state drive technology and storage tiering, a 16 Gb FC fabric can help significantly decrease storage I/O response time to match the processing power of the server CPUs.
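To put that 1.6 GBps figure in perspective, here is a back-of-envelope sketch (my own illustration, not from the Redpaper) of how long a bulk transfer takes at the commonly quoted nominal per-direction rates of 8 Gb FC (about 800 MBps) and 16 Gb FC (about 1,600 MBps):

```python
# Back-of-envelope sketch: idealized bulk-transfer time per FC generation.
# Rates are the commonly quoted nominal per-direction throughput figures;
# real transfers also see protocol overhead, queueing, and storage latency.
NOMINAL_MBPS = {"8GFC": 800, "16GFC": 1600}

def transfer_time_seconds(data_gb: float, fabric: str) -> float:
    """Idealized seconds to move data_gb gigabytes over a single port,
    ignoring everything except the nominal link throughput."""
    return data_gb * 1000 / NOMINAL_MBPS[fabric]

for fabric in NOMINAL_MBPS:
    t = transfer_time_seconds(1024, fabric)  # roughly a 1 TB working set
    print(f"{fabric}: ~{t:.0f} s to move 1 TB on one port")
```

The point of the sketch is simply that, all else being equal, doubling the link rate halves the time a CPU spends waiting on a bandwidth-bound transfer.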
IBM Flex System integrated 16 Gb FC technology can help achieve the following (compared to 8 Gb FC):
- Up to 30-50% better virtual machine density and a higher number of concurrent users due to increased storage bandwidth
- Up to 20-40% fewer servers required to support a specified workload
- Half as many inter-switch links (each running at twice the speed) required in scalable SANs
- Higher reliability and availability of services due to the smaller number of components used to build the solution
- Up to twice as fast access to business-critical data
- Lower acquisition costs due to fewer systems and components
- A shorter ROI time frame and lower overall TCO through efficient utilization of server resources and lower power, cooling, and management costs
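The VM density and server count claims above follow from a simple capacity argument. As a hypothetical illustration (the per-VM demand figure below is an assumption of mine, not a measured value), when storage bandwidth is the binding constraint, doubling the fabric speed doubles the number of VMs a single port can feed:

```python
# Hypothetical illustration: VM density when storage bandwidth is the
# bottleneck. The per-VM demand (40 MBps) is an assumed number chosen
# for illustration; real density also depends on CPU, memory, and IOPS.

def storage_bound_vm_count(port_mbps: float, per_vm_mbps: float) -> int:
    """VMs one port can sustain if each VM needs per_vm_mbps of
    storage bandwidth and nothing else is the limiting resource."""
    return int(port_mbps // per_vm_mbps)

print(storage_bound_vm_count(800, 40))   # 8 Gb FC port  -> 20 VMs
print(storage_bound_vm_count(1600, 40))  # 16 Gb FC port -> 40 VMs
```

Equivalently, fewer (faster) servers and inter-switch links are needed to carry the same aggregate storage traffic, which is where the consolidation and TCO benefits come from.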
If you want to learn more, I suggest that you read the recently published IBM Redpaper Positioning IBM Flex System 16 Gb Fibre Channel Fabric for Storage-Intensive Enterprise Workloads. It provides examples of typical areas where 16 Gb FC would be an appropriate choice and discusses 16 Gb FC performance results obtained in the IBM ATS lab.