For more than a year, I have been talking to Cloud Application Architects at MSPs and Telcos around the world looking to develop super scalable storage-based applications to create new businesses for their organizations. They want to do everything in software and drop it on commodity hardware like they see Google and Facebook doing – but they don’t have the resources Google and Facebook have to develop everything from scratch.
Their requirements quickly coalesce around developing to an open API and targeting commodity hardware so they aren’t tied to a specific vendor’s API and can scale cost effectively. They quickly conclude that OpenStack is the way to go. See http://www.openstack.org/
And by and large they are happy developing to Nova, OpenStack’s compute API. They feel comfortable working with open source code at the compute level: they can architect in resilience and robustness themselves, and any tradeoffs are worth it for the open API.
For them, that comfort level stops at OpenStack’s storage components, Cinder and Swift. While they love the OpenStack storage APIs, they set a much higher bar for resiliency, robustness, and reliability in storage than in compute: they can afford to lose a server or a VM, but they cannot risk losing data. What they said they wanted was tested, resilient, reliable, robust, industrial-strength storage – done in software, on commodity hardware – with OpenStack storage APIs. And for the past couple of years, there was no solution for them, until now.
The latest release of OpenStack, Havana, includes a Cinder driver (Cinder is OpenStack’s block storage service) for IBM General Parallel File System (GPFS) – giving architects building public, private, and hybrid clouds access to the features and capabilities of the industry’s leading enterprise scale-out software-defined storage.
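To give a flavor of what deploying this looks like, enabling the GPFS driver comes down to a few lines in cinder.conf. This is a minimal sketch: the driver class and option names are from the Havana GPFS driver, but the paths and values shown are illustrative, not defaults.

```ini
# /etc/cinder/cinder.conf -- example settings for the GPFS Cinder driver.
# Paths below are examples; adjust them to your own GPFS mount points.
[DEFAULT]
volume_driver = cinder.volume.drivers.gpfs.GPFSDriver

# Directory inside the mounted GPFS filesystem where volume files are kept
gpfs_mount_point_base = /gpfs/openstack/cinder/volumes

# Point at a Glance image store on the same GPFS filesystem so volumes
# can be created from images via copy-on-write file clones
gpfs_images_dir = /gpfs/openstack/glance/images
gpfs_images_share_mode = copy_on_write

# Create volumes as sparse files; cap how deep clone chains may grow
gpfs_sparse_volumes = True
gpfs_max_clone_depth = 8
```

With `gpfs_images_share_mode = copy_on_write`, creating a bootable volume from an image becomes a near-instant file clone that shares blocks with the source, rather than a full copy.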
And the Cinder driver is just the beginning. IBM’s vision for GPFS for OpenStack is to create a single scale-out data plane for the entire data center ‒ or multiple connected data centers worldwide. GPFS will unify VM images, block devices, objects, and files within a single namespace no matter where data resides, with software-defined storage functions like declustered GPFS Native RAID (GNR) and policy-based placement of data in the best location, on the best tier (for performance and cost), at the right time – all in software, on heterogeneous, commodity, industry-standard hardware. This is shown in the figure below.
GPFS provides a common storage plane that can be fully integrated with OpenStack Cinder, Swift, and POSIX control planes
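The policy-based placement mentioned above is expressed in GPFS’s SQL-like policy language. The sketch below is illustrative only – the pool names (`ssd`, `system`, `nearline`), thresholds, and file-name pattern are assumptions, not part of any OpenStack integration:

```
/* Place new VM image files on the fast pool; everything else on system. */
RULE 'images-on-ssd' SET POOL 'ssd' WHERE LOWER(NAME) LIKE '%.img'
RULE 'default' SET POOL 'system'

/* When the ssd pool passes 80% full, migrate files untouched for 30 days
   to the nearline pool until occupancy drops back to 60%. */
RULE 'age-out' MIGRATE FROM POOL 'ssd' THRESHOLD(80,60)
     TO POOL 'nearline'
     WHERE (CURRENT_TIMESTAMP - ACCESS_TIME) > INTERVAL '30' DAYS
```

Placement rules of this kind are installed with `mmchpolicy`, and migration rules are evaluated by `mmapplypolicy` – this is how GPFS moves data to the right tier at the right time without the application being aware of it.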
The ability to use industrial-strength GPFS to manage volumes (Cinder), objects (Swift), images (Glance), and shared filesystems (Manila), to use file clones to share data quickly and efficiently within and between those components, and to integrate legacy applications through POSIX interfaces like NFS, will be a boon for MSP and Telco cloud-scale application developers deploying on OpenStack. So take a look at GPFS Software Defined Storage for OpenStack, and stay tuned to our progress.
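For a sense of how those file clones work at the GPFS level, the commands below sketch turning a golden image into a new volume file. The `mmclone` subcommands are real GPFS administration commands; the file paths are hypothetical examples:

```
# Create a read-only snapshot of the source file, then a writable
# copy-on-write clone of it; the clone shares blocks until written.
mmclone snap /gpfs/glance/images/golden.img /gpfs/glance/images/golden.img.snap
mmclone copy /gpfs/glance/images/golden.img.snap /gpfs/cinder/volumes/vol-0001.img
```

Because the clone shares unmodified blocks with its snapshot parent, the new volume is available almost immediately and consumes space only for the blocks that are later changed.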