SDN Takes Manhattan: IBM’s new live migration demo
From the Muppets to the Weeping Angels, many have tried to take Manhattan by storm in TV and film. This past month, IBM had an opportunity when we unveiled a new demo showcasing the power of software defined networking (SDN) and Network Function Virtualization (NFV) for both data centers and telecommunication providers.
This demo is the latest work from IBM’s collaboration with the New York State Center for Cloud Computing & Analytics at Marist College. Established earlier this year with a $3M grant from the governor of New York, this center supports a wide range of projects that benefit businesses in New York State. One of these projects, sponsored in part by IBM’s university relations group and by other business partners including Adva Optical Networking, is the SDN/NFV test bed, located in the brand new Hancock Building on the Marist campus. This data center houses many different types of IBM servers, storage, and system networking devices (including IBM PureSystems and IBM G8264 switches), as well as optical wide area networking equipment from Adva (the FSP 3000 platform). This environment was used to develop a new, first-of-its-kind deployment of SDN across multiple data centers separated by 125 km of optical fiber, in which the optical WAN equipment is also orchestrated by the same SDN controller. While current SDN standards only address Layers 2 and 3, there is an effort under way to extend OpenFlow down to the physical layer, which is expected to make significant progress in 2014.
The Marist College faculty and students did an outstanding job writing several new applications for this environment, including an open source management graphical user interface called Avior, which controls OpenFlow devices from a mobile device such as a tablet or smartphone. Avior allows us to monitor network statistics, configure traffic flows, and administer firewall or load balancer properties, without requiring the network administrator to write complex Python scripts. Avior incorporates a static flow pusher which can deploy pre-configured network profiles or schedule network configuration events for touch-free provisioning. A second application, developed specifically for the Adva optical network (the students have named it Advalanche), is called by Avior to orchestrate the WDM equipment. We use the open source application Ganglia to monitor events such as server utilization or available memory; when a preset threshold is exceeded, Ganglia executes an action such as provisioning a network profile, migrating a VM, or cloning a VM. The SDN controller runs OpenFlow 1.0, and can be either the IBM PNC controller or an open source controller such as Floodlight (we plan to introduce OpenDaylight into the test bed when it becomes publicly available this coming December).
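For the curious, here’s roughly what a static flow push looks like against an open source controller like Floodlight. This is just a sketch of the REST call involved, not Avior’s actual code; the controller address, switch DPID, and port numbers are hypothetical placeholders.

```python
# A sketch of pushing a pre-configured flow to a Floodlight controller via
# its Static Flow Entry Pusher REST API (OpenFlow 1.0 era). The controller
# address, switch DPID, and port numbers are hypothetical placeholders.
import json
import requests  # assumes the requests library is installed

CONTROLLER = "http://10.0.0.10:8080"  # hypothetical Floodlight REST endpoint

flow = {
    "switch": "00:00:00:00:00:00:00:01",  # DPID of the target OpenFlow switch
    "name": "video-stream-flow-1",        # unique name for this flow entry
    "priority": "32768",
    "ingress-port": "1",                  # match traffic arriving on port 1
    "active": "true",
    "actions": "output=2",                # forward matched traffic out port 2
}

resp = requests.post(CONTROLLER + "/wm/staticflowentrypusher/json",
                     data=json.dumps(flow))
print(resp.text)  # the controller replies with a short status message
```

A front end like Avior essentially wraps calls of this shape in a touch-friendly interface, so the administrator never sees the JSON.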
In this demo, we showed live VM migration (moving a VM while it continues to run, uninterrupted) being triggered automatically whenever CPU utilization on the host server exceeded 75%. In this example, the VM was running a video streaming application, but in principle we could use any NFV application, such as a virtual firewall, router, or telecom function. Possible use cases for this capability include multi-site load balancing, or cloud bursting between a private data center and a public cloud; this work may also have applications to the so-called “noisy neighbor” problem in public cloud deployments. Since network costs can be a significant fraction of the total cost of cloud computing services, this approach can also be used to provision dynamic workloads and improve utilization of the data networks.
When over-utilization of the server occurs, Ganglia triggers a VM migration. The SDN controller automatically provisions all the switches in the source and target data centers, as well as wavelengths on the optical network (subject to available physical resources and workload priority levels). End-to-end network provisioning can be accomplished in about a minute, compared with current approaches that can take days or weeks because they require manual intervention to configure both the data center network and the WAN. VMware is then used to live migrate the VM from a server in the source data center to another server in the target data center (in the future we plan to use other hypervisors, including KVM and PowerVM). As usual, migration time is a function of the size of the VM and the available network bandwidth; in our example, a 3 GByte VM live migrates in a few minutes (we have partnerships with other colleges in New York State which are developing models for VM migration time). Once migration is complete, the SDN controller reverts all the network devices to their original states, including the WAN.
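To give you a flavor of the trigger mechanism, here’s a simplified sketch of the kind of monitoring loop involved. The host names and the migrate() stub are hypothetical; in the real demo, crossing the threshold drives the SDN controller and VMware rather than a print statement.

```python
# A simplified sketch of the demo's trigger loop: poll Ganglia for a host's
# CPU utilization and kick off a live migration when it crosses 75%. The
# host names and the migrate() stub are hypothetical placeholders.
import socket
import time
import xml.etree.ElementTree as ET

GMOND = ("gmond.example.edu", 8649)  # hypothetical Ganglia monitoring daemon
THRESHOLD = 75.0                     # percent CPU, as in the demo

def cpu_utilization(host_name):
    """Read gmond's XML dump over TCP and return one host's CPU utilization."""
    sock = socket.create_connection(GMOND)
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)
    sock.close()
    root = ET.fromstring(b"".join(chunks))
    for host in root.iter("HOST"):
        if host.get("NAME") == host_name:
            for metric in host.iter("METRIC"):
                if metric.get("NAME") == "cpu_idle":
                    return 100.0 - float(metric.get("VAL"))
    return None

def migrate(vm, target):
    # Placeholder: the demo provisions the WAN path via the SDN controller,
    # then calls VMware to live migrate the VM.
    print("migrating %s to %s" % (vm, target))

while True:
    util = cpu_utilization("source-host.example.edu")
    if util is not None and util > THRESHOLD:
        migrate("video-vm-1", "target-host.example.edu")
        break
    time.sleep(30)  # poll every 30 seconds
```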
This demo was presented for the first time at the Adva North America Technology Symposium, held just off Broadway in New York City on September 12. The response has been overwhelming; since we’ve taken Manhattan, we’ve gone on to showcase this work worldwide, ranging from major service providers who’ve come to visit Marist College to remote demos for interested audiences in Japan and Bratislava. If you happen to be near Frankfurt, Germany the week of October 15, stop by the SDN & OpenFlow World Congress to see Prof. Rob Cannistra from Marist presenting the demo (and don’t forget to visit the IBM and Adva presentations while you’re there).
Projects like this one demonstrate the disruptive effect SDN and NFV are having on the global market, and this is only the beginning. Now that we’ve started spreading the news about SDN in New York, I’ll keep you posted on our latest SDN research and other developments. You can also read more about SDN and cloud computing in the recently released 4th edition of my book, the Handbook of Fiber Optic Data Communication from Academic Press/Elsevier. If you know anyone who’d like to see this demo, or any companies / universities who would like to collaborate with the Marist lab, please drop me a line (email@example.com) or a message on Twitter (@Dr_Casimer).
Software Defined Networking and 100G are hot topics at symposium
Last week, I presented some of my work at the annual North America Technology Symposium sponsored by Adva Optical Networking, at the Millennium Broadway Hotel in New York City. Regular readers of my blog will recall that Adva is among the many companies who have publicly endorsed the Open Datacenter Interoperable Network (ODIN), which is IBM’s vision for next generation data networking. While the Adva symposium was admittedly focused mostly on their solutions, there were many presentations of general interest. The presentations are available here, but I’d like to review the highlights in this blog.
The symposium was a full day event, opening at 9 AM with a welcome from Adva’s CEO, Brian Protiva, followed by a series of invited speakers. After lunch, two breakout sessions were offered for either enterprise or carrier networking (I chose the enterprise track, which focused on 100G metro and 16G Fibre Channel SAN). The day wound up with a Q&A session and dinner.
First, one of the Vice Presidents from Verizon discussed adoption of software defined networking to create what he called “service aware networks”. He cited several examples of how carriers and ISPs can leverage SDN to grow top line revenue and provide value added services. We heard many interesting factoids during the day, such as how Internet content consumption, driven by video, will grow 10X by 2017. It was clear that commoditized hardware and centralized software control were creating an interesting value proposition in this market.
Later, my presentation dealt with using SDN to help manage the combination of exponential capacity growth and declining margins faced by many ISPs and cloud providers. In the past, networking was all about how quickly you could deploy, scale, and manage infrastructure to create value. Networking equipment was relatively feature-poor; a combination of low bandwidth, custom ASICs with limited functionality, and immature protocols in the data and control planes made it difficult to realize a higher value proposition. That’s all changing; modern Ethernet bandwidth is approaching that of a computer backplane, flatter two-tier Clos or mesh topologies offer better performance, and protocols like TRILL and OpenFlow coax more value from merchant silicon. In this age of network affluence, users demand a higher quality of service, including bandwidth and latency guarantees, turnkey provisioning, and application-aware network optimization. This leads to a new value proposition for SDN networks; virtualization of the data center network and beyond is the next big frontier. This value doesn’t come without challenges, though. Current OpenFlow is driving us back toward a centralized management framework, and scaling is an issue. The benefits of a flow-switched topology are clear, however, and include standardized, fine-grained flow control, rapid application deployment, and end-to-end performance guarantees – real value that clients are willing to pay for.
From Michael Haley, IBM Distinguished Engineer, we learned more about how cloud computing is changing the world. Market dynamics are volatile; look at the world’s 10 largest companies from the year 2000, and you’ll only recognize 2 of the same names on the list today. But the market potential is also growing; there will be over a trillion devices connected to the Internet next year. And smart enterprise CEOs are leveraging hybrid cloud in 60% of their installations (up from 33% just two years ago). Cloud adoption is driven by workloads, and some analysts believe that a major segment will be compute as a service, representing over 20% of a projected $55B market in 2014. Business analytics, social business, telecommunications, banking, mobile video, healthcare, and utilities dominate in different markets worldwide. In China, for example, strategic investments in “cloud cities” have been launched to support a major part of the current $4 trillion, 5-year economic plan.
Supporting an earlier assertion about bandwidth growth, the enterprise breakout sessions included a live demonstration of 100G metro Ethernet and a discussion of the power, space, and cost for such solutions. Using a network test set and traffic source, Adva showed how they can transport 100G over metro distances with appliances about 1U high that fit into a standard 19-inch equipment rack. Various cable configurations and latency measurements were also demonstrated during this presentation.
I’m sure by now you’ve got the general idea: bandwidth is exploding in the metro area at lower cost than ever before, driven by new applications such as cloud computing and the promise of software-defined networking. Put on your shades; the future for metro optics looks bright indeed.
Are you looking forward to high bandwidth optical links in the metro cloud, or are you just blinded by the light? Drop me a line or send me a tweet @Dr_Casimer if you’d like to discuss more.
Talking Networking at the System x & PureSystems Technical University
While I’ve been trying to enjoy the nice summer weather as much as anyone (even with teenagers, Disney World is simply awesome), the wheels of technology continue to push forward even during summer vacation. For example, IBM recently hosted the System x and PureSystems Technical University in San Francisco, California. With over 27 major sponsors and exhibitors ranging from Intel to QLogic, this was an event worth attending. As usual, my interest lies in all things related to data center networking, so I was pleased to see more content on IBM’s SAN Volume Controller (SVC) presented by one of our business partners, Brocade (although IBM invented SVC some time ago, Brocade was only recently qualified to support stretch clusters as part of this solution). Regular readers of my blog will recall that Brocade is among the endorsers of the Open Datacenter Interoperable Network (ODIN), and that the SVC Stretch Cluster solution was discussed previously when I presented at the IBM Storage Edge conference in June. I’d like to mention a few additional features of storage networking using SVC that didn’t make it into my earlier blog, and try to segue from Disney World to World Wide Port Names (let me know how you think this works out).
If you missed this event and would like to follow along, the presentation from Brocade can be accessed at the IBM Tech University site; once you’ve created a login, just search for presentation evr51. You can also catch up on this solution through the IBM storage road show making its way around the country for the next month or so.
Multi-site storage deployments are useful for many applications, including improved physical security, disaster avoidance/recovery, and increased uptime from moving workloads to different compute centers. The IBM SVC Stretch Cluster solution aligns your storage access needs with virtual machine mobility across extended distances. The actual distance depends on your latency requirements; since we can’t get around speed-of-light limitations (yet), for typical applications IBM recommends 100 to 150 km or so, although the solution is qualified up to 300 km or more. SVC Stretch Clusters provide read/write access to storage volumes across multiple sites, and work in concert with Tivoli management products to ensure synchronous data replication. Also, SVC supports SAN routing with industry standard FC-IP links for intercluster communications and volume mirroring within split cluster groups. The underlying IP infrastructure complies with ODIN best practices, and includes Brocade offerings such as the MLXe switch to provide line rate 1, 10, and 100 Gbit/s Layer 2 connectivity based on MPLS and VPLS/VLL.
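To see why distance is the knob that matters here, consider a quick back-of-envelope calculation (my own numbers, not an official spec):

```python
# Back-of-envelope check on the speed-of-light budget. Light in fiber
# travels at roughly 200,000 km/s (refractive index ~1.5), i.e. about
# 5 microseconds per km each way.
ONE_WAY_US_PER_KM = 5.0

for km in (100, 150, 300):
    round_trip_ms = 2 * km * ONE_WAY_US_PER_KM / 1000.0
    print("%3d km -> %.1f ms round trip per synchronous write" % (km, round_trip_ms))

# Prints 1.0, 1.5, and 3.0 ms respectively. A synchronous mirror can't
# acknowledge a write until the remote copy is safe, so this delay is added
# to every I/O -- which is why stretch-cluster distances top out at a few
# hundred km.
```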
Digging down into the technology a bit further, Brocade supports the IBM 16 Gbit/s Fibre Channel adapters used in System x solutions; both single and dual port options are available, running over 1,000,000 IOPS per adapter. These adapters support features including SAO (application quality of service assignment), target rate limiting, boot over SAN, boot LUN discovery, NPIV, and switched N_ports. IBM Flex System includes embedded offerings such as 24-port and 48-port scalable SAN switches, also running 16 Gbit/s links with over 500,000 IOPS per port. The SAN switches used in SVC solutions provide additional buffer credits to support long distance connectivity (half a dozen ports running up to 250 km without performance droop, and negligible droop up to 300 km or longer). To reduce the number of fibers required between sites and save cost when connecting two remote locations, you can consolidate up to four lower data rate links into a single inter-switch link (ISL) at 16 Gbit/s, and then logically combine up to eight ISLs into a single high performance frame-based trunk.
When using the Brocade Fibre Channel adapters in a fabric, it’s possible to eliminate fabric reconfiguration when adding or replacing servers. You can also reduce the need to modify zones and LUN masking, since you can pre-provision fabric ports with virtual World Wide Port Names (WWPNs) and build your boot LUN zones, fabric zones, and LUN masks in advance. It’s easy to migrate virtual WWPNs within a switch, and map them to physical devices to help with asset management. Further, you can use diagnostic port features to non-intrusively verify that your ports, transceivers, and cables are in good working order, reducing fabric deployment and diagnostic times from days to a few hours or less (depending on the size of your fabric).
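To make the pre-provisioning idea concrete, here’s a toy sketch of the bookkeeping involved. This is not a Brocade API, and every name and WWPN below is made up:

```python
# A toy illustration (not a Brocade API) of why virtual WWPNs eliminate
# fabric reconfiguration: zones reference stable virtual names, and the
# fabric maps each virtual name to whichever physical adapter currently
# occupies the slot. All WWPNs below are made up.
virtual_wwpns = {
    "chassis1-bay3": "50:00:00:00:00:0a:00:01",
    "chassis1-bay4": "50:00:00:00:00:0a:00:02",
}

# Zones and LUN masks are written against the virtual names once...
boot_zone = ["50:00:00:00:00:0a:00:01",   # server's virtual WWPN
             "50:06:00:00:00:bb:00:14"]   # storage array port

# ...while the physical adapter behind each slot can change freely.
physical_map = {"chassis1-bay3": "10:00:00:00:c9:12:34:56"}

def swap_server(slot, new_physical_wwpn):
    """Replace the adapter in a slot; zoning and LUN masking stay untouched."""
    physical_map[slot] = new_physical_wwpn
    print("slot %s: %s now backed by %s (no zone changes needed)"
          % (slot, virtual_wwpns[slot], new_physical_wwpn))

swap_server("chassis1-bay3", "10:00:00:00:c9:ab:cd:ef")
```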
If you’d prefer to connect multiple sites using wavelength multiplexing (such as the offerings from ODIN endorsers Adva, Ciena, or Huawei), you can run ISLs directly over a WDM network. I’ll have more to say about WDM solutions qualified by IBM in a future blog. For now, here’s a quick tip for configuring your Brocade switch fabric: if you want to run line rate 10 Gbit/s from the Brocade SAN switch directly over WDM, the first 8 ports on the FC16-32 or FC16-48 port blades can be configured to operate at this data rate – you can save a slot in the DCX with this configuration. And remember that you can always logically partition the switches to isolate different traffic types, so you can connect storage resources in a PureFlex to a larger existing SAN that might be running your System z FICON traffic, and keep the two applications isolated from each other.
Your SVC Stretch Cluster solution complements the integrated compute power of PureFlex, and both of them can co-exist in your data center. All the PureFlex resources are managed from one point with Flex System Manager (FSM), and the use of open industry standard protocols means that you’ll be getting the lowest possible hardware cost. Of course, you knew all that if you made it to PureSystems Technical University for your summer vacation, so you can get started saving money and improving storage performance right away. If you missed it, don’t worry…IBM will be offering more technical university events in the coming months, spread around the world, for not only PureSystems but many other brands as well. If you can attend, drop me a line & let me know how you liked it; I’ll keep everyone posted on the feedback through my blog & Twitter feed.
The Cloudy with a Chance of SDN World Tour 2013
“So what…I’m still a rock star…and guess what, I’m having more fun…” words to live by from everyone’s favorite role model, Pink
I had an epiphany this past month, when my manager congratulated me on becoming a rock star. At first I wasn’t quite sure what to make of this. Sure, my classic rock playlists for the Hudson Valley FIRST Lego League tournaments have earned plenty of compliments, but that’s just because I was lucky enough to be raised in the 80s, when the best rock music ever was being written (who knew, right?).
As it turns out, he wasn’t referring to my excellent taste in music, just the huge amount of traveling I’ve been doing lately. There’s been tremendous interest in software-defined networking (SDN) this year, and I’ve been going on tour to educate & inform people at a series of events sponsored by IBM and others. You might have seen me in New York City earlier this year, when I presented a two-day session for industry & academic users of SDN at IBM’s office on Madison Avenue. I was also in Chicago this past May for a briefing sponsored by Network World. This summer, you can catch my act in a city near you; sign up at the links below to attend for free, or to check out events you may have missed:
June 10 – Poughkeepsie, NY (at the annual NSF Enterprise Computing Conference, held at the recently established New York State Center for Cloud Computing & Analytics, Marist College. All the conference presentations are now available online).
July 9 – Toronto, Ontario, Canada (at the IBM Markham Innovation Center), with my special opening act from the Marist College SDN Lab, to get the crowd warmed up
August 14-15 – Boston, Mass. (an IBM briefing in Waltham, where my colleagues from CUNY will be opening for me with a talk on teaching SDN, then a stop at the SHARE conference that week)
August 28 – Philadelphia, PA (an IBM event at the Valley Forge Casino, with more participation from Marist & CUNY). They’ll be camping out in line to get tickets for this one, so be sure to book early!
August 29 – Washington, D.C. (another IBM event with the Marist and CUNY teams)
September 12 – New York City (for the Adva North America Partner Symposium)
And just like a real rock star, I get to call this an international tour because we have one stop in Canada! Although to be fair, you might have caught a colleague of mine presenting some new research on the European leg of our tour, at the IEEE ICC conference in Budapest, Hungary this past June, where we presented some of our recent work on orchestrating networks within and between multiple data centers. All these dates make it hard to keep track; maybe I should get some T-shirts printed up…
If you can’t catch me on tour this summer, don’t worry; rock stars also do videos, and I’m not going to be an exception. OK, so it’s more of an interview where I describe the IBM ODIN network reference architecture and OpenDaylight, but I still hope you’ll find it interesting. Check out my video here; I’m not promising that it will sync up perfectly with Dark Side of the Moon if you play it backwards at half speed, but you never know.
Or, if you’re really ready to rock out hard core, dare to check out my latest article on innovation, with my co-authors from Harvard Business School, in the March issue of the Journal of Innovation Management, Policy & Practice. It’s totally off the chain.
Of course, I don’t have any rock star groupies yet…which is where you, my loyal readers, come in. After you check out my talks, papers, or videos, drop me a line on Twitter (@Dr_Casimer) & let me know what you think about SDN, or what you’d like to see covered in my next blog. And don’t stop believin’ in the power of SDN as you continue your journey towards open source, virtualized networks.
I’m on the Edge….of storage…
(and I’m hangin’ on a modem with you)
If you haven’t guessed from the blatant pop culture reference in the title of my blog, I spent the first week of June at the IBM Edge storage conference (and I promise, if you keep reading, that I’ll refrain from making any puns on the Edge theme – despite the temptation to bring up a favorite Irish rock band hero). Anyway, it would hardly be appropriate to mention another band when Foreigner did such an awesome job rockin’ the conference. Who knew when I was growing up that the 80s would produce the greatest rock ballads of all time?
Anyway, it’s been a great week at IBM Edge, hearing about all the latest advances in storage technology; in case you missed the talk on SVC Stretch Clusters as an example of the ODIN reference architecture, let me say a few words about it here. This will get a bit technical, but don’t worry…we’re not going to have a quiz at the end.
The problem we’re trying to solve is VM mobility over extended distance, and multi-site workload deployment across data centers. VM mobility not only improves availability of your applications, it’s a more efficient way to use limited storage resources. The most common reason for using this approach typically involves some form of business continuity or disaster avoidance/recovery solution, including such planned events as migrating one data center to another or eliminating downtime due to scheduled maintenance. But given an increasingly global work force, there are other good reasons to explore VM mobility. Many clients are realizing that this approach provides load balancing and enhanced user performance across multiple time zones (the so-called “follow the sun” approach). Others are realizing that by moving workloads over distance, it’s possible to optimize the cost of power to run the data center; since the lowest cost electricity is available at night, this strategy is known as “follow the moon”.
IBM has announced a software bundle featuring SAN Volume Controller (SVC), which includes Stretch Clustering over long distance. This provides read/write access to storage volumes located far apart from each other, enabling data replication across multiple data centers. SVC works in concert with Tivoli Productivity Center (TPC) to manage your storage, and integration with VMware products like vMotion and vCenter enables transparent migration of virtual machines and their corresponding data or applications.
Let’s consider two data centers separated by up to 300 km (supported in SVC 6.3), and interconnected by a traditional IP network such as the Internet or by dark optical fiber. We require many of the features of an Open Datacenter with an Interoperable Network (ODIN) for this solution, including lossless Ethernet fabrics, automated port profile migration, Layer 2 VLANs in each location, and an intersite Layer 2 VLAN supporting MPLS/VPLS (preferably with a 10G or 100G Ethernet line speed between sites, since the SAN infrastructure is likely running either 8G or 16G Fibre Channel). An SVC split cluster uses industry standard Fibre Channel links for both node-to-node communication and for host access to SVC nodes, so your production sites must be connected by Fibre Channel links or FC-IP.
Generally a business continuity solution will define one physical location as a failure domain, though this can vary depending on what you’re trying to protect against; a failure domain could also consist of a group of floors in a single building, or just different power domains in the same data center. In order for SVC to decide which storage nodes survive if we lose a failure domain, the solution uses a quorum disk (a management disk that contains a reserved area used exclusively for system management). At a minimum, you should have one active quorum disk on a separate power grid in one of your failure domains; up to three quorum disks can be configured with SVC, though only one is active at any given time. Metro mirroring is recommended for this type of solution; a maximum round trip delay of 80 ms is supported (note that routing is required, since the fabrics at each location are not merged).
Connectivity between sites may take several forms. First, if the regular Internet provides sufficient quality of service (QoS) and meets your business objectives for recovery time, recovery point, etc., the IBM SVC uses industry standard protocols (FC-IP) in conjunction with a Brocade switch infrastructure to transport storage over distance. This is typically a low cost option, though you might require multiple circuits with load balancing (a so-called virtual trunk). Second, it’s possible to run a Brocade inter-switch link (ISL) between SVC nodes (with SVC 6.3.0 or higher). Brocade switches provide ISL options including consolidation of up to four ISLs at 4 Gbit/s each (creating a 16G trunk), or up to eight ISLs at 16 Gbit/s each (creating a 128G trunk). Buffer credit support for up to 250 km (nearly the SVC limit) is available. SVC supports SAN routing (including FC-IP links) for intercluster storage connections. Finally, note that you can connect multiple locations with optical fiber and use a variety of protocol-agnostic wavelength division multiplexing (WDM) products in this solution. This may provide better QoS or dedicated bandwidth for large applications. A 10G passive WDM option is available on some Brocade switches (with options such as in-flight compression and encryption), or a stand-alone WDM product can be employed (IBM has qualified many such solutions, including those from ODIN participants Adva, Ciena, and Huawei). Your local service provider may also offer a variety of managed service backup options using a combination of these features. Attachment of each SVC node to both local and remote SAN switches (without ISLs) is typically done in this case. Both the ISL and non-ISL approaches are known as split I/O groups.
IBM SVC storage manager works in concert with vCenter through API plug-ins. This includes VADP (which provides data protection for snapshot backups at the VMware-level rather than the LUN level, allowing you to concentrate on the value of the VM rather than the physical location of the associated data). Performance improvements can be achieved by offloading some functions to the storage hypervisor, as well. The storage hypervisor includes a virtualization platform, controller, and management (TPC supports application aware snapshots of your data through Flash Copy Manager). At the management level, IBM also allows the storage hypervisor components to be managed as plug-ins for vCenter. VM location can be managed through vCenter with Global Server Load Balancer (GSLB), which works in concert with a Brocade API plug-in. Further, vCenter is integrated with Brocade Application Resource Broker (ARB), which can report VM status back to a Brocade ADX switch. vCenter and GSLB manage both VM and IP profiles, performing intelligent load balancing to redirect traffic to the VM’s new location.
With this combination of ODIN best practices, IBM SVC, and Brocade SAN/FC-IP connectivity, your data can rest easy, wherever it happens to be (and so can you).
Want to know more about IBM/Brocade partnerships, or SVC Stretch Cluster solutions? Drop me a line, or ask a question on my Twitter feed.
Top Ten Must-Reads on IBM Networking Strategy
There’s been so much going on in the world of data networking lately that I hardly know where to begin. It feels like I’ve been living on Internet Time this year (maybe you have, too); it’s hard to believe it’s already most of the way through first quarter. So, while I usually don’t take this approach, I thought that the fastest way to get everyone up to date on all the latest networking news would be to let you pick your favorites from the list of my recent presentations, podcasts, and webinars.
For starters, I recently got back from the Open Network Exchange meeting in New York City, sponsored by Network World magazine in mid-February. I gave a talk on how software-defined networking is being used as part of the ODIN network architecture, including some thoughts on finding a standard definition for SDN (something even Bob Metcalfe hasn’t been able to do).
I also spoke about how SDN disrupts existing markets, reviewed IBM’s early client adopters & the benefits they have realized, and offered a few thoughts on what the future holds. You can see my presentation, plus others from the conference, at the event website.
Of course, there’s still a lot of debate among different parts of the industry regarding what SDN really means. In particular, the datacom and telecom worlds have surprisingly different perspectives on this issue. I recently participated in a roundtable discussion on this topic, along with representatives from Cisco, Juniper, Huawei, Alcatel-Lucent, MRV, AT&T, Verizon, Orange, Ericsson, Rad Data Communications, and the ONF; you can listen to the discussion here. In the future we plan more of these roundtable discussions, leading up to the 2013 MPLS/Ethernet World Congress, so keep watching as the debate continues.
I still feel that network virtualization is the next big thing in our industry, and software-defined networking has become one of the hottest topics since the creation of Ethernet 25 years ago (if your memory doesn’t go back that far, read the first chapter in your CCNA qualification guidebook to see how the world used to be made up of private networks from IBM, DEC, Xerox, and others). While SDN is almost certainly over-hyped right now, I believe it’s nearing the peak of the Gartner Group hype cycle, as evidenced by some early adopters who have found high-value use cases for this technology. To hear more, listen to my podcast with the Lippis Group on SDN enablement of next generation data centers, recorded in December 2012.
While you’re on the Lippis Group website, if you still haven’t read my blog or the IBM System Networking website articles about the Open Datacenter Interoperable Network (ODIN), download my podcast on this topic to get up to date on how ODIN is being applied at large data centers worldwide, and how it will continue to reflect changes in the networking community throughout this coming year.
If you’re a regular reader of my blog and Twitter feed, then you know that I’m passionate about open standards. In fact, if somebody tries to tell you they have SDN working in their data center today, but it only runs on their equipment, don’t believe them…SDN only works when it’s part of a larger, standards-based data center strategy. If you’d like to read about that larger strategy, and how it relates to big data, analytics, and other workloads, there’s a nice, short introduction in the new IBM Redpaper Point of View (PoV) article series. Sponsored in part by the IBM Academy of Technology, these new Redpapers bring you all the key facts for a quick tutorial on a subject, and refer you to the much longer Redbooks for a step-by-step cookbook on how to make them work for you. Redpapers are available on a wide range of topics; for data networking, start with my PoV on data networking, IBM Redbooks publication #redp-4933-00.
Interested in storage area networking, or wondering how the SAN is going to change in the future? I've been working on that question with some of our industry partners, including ODIN-endorser and leading SAN authority Brocade, who have also recently been qualified by IBM for extended distance backup solutions using SAN Volume Controller (SVC). To see how SVC handles long distance Fibre Channel applications and integrates with VMWare management solutions, check out our recent presentation from IBM SHARE (session 12735) on avoiding the fog and smog that can come with cloud networks.
Late last year, the governor of New York State announced the creation of a new, $3M Center for Cloud Computing and Analytics, based at Marist College. IBM has funded an SDN research lab which is affiliated with this group, and which will also be taking advantage of Marist’s membership in the Internet2 consortium (regular blog readers will also recall that Marist is the first academic institution to endorse ODIN). While this program is still in its early days, Marist has successfully built an SDN testbed using the Floodlight controller, made contributions to the Floodlight distro, released an open source SDN dashboard tool called Avior, and begun to prototype SDN in a mainframe enterprise environment. The college recently presented a 90-minute, sold-out presentation on their SDN work at the TIP 2013 conference in Hawaii; if you didn’t get a trip to this tropical paradise to hear them, you can still find their presentation and summaries of their recent work online.
Did I hear someone ask how Google is using optical technologies to add value in their data center networks? (Yes, I have the technology to hear you through my blog page, but if I told you how it works I’d have to kill you.) Anyway, some of my colleagues at Google recently weighed in on this topic for Laser Focus World, and I was subsequently invited to present a webinar based on their work (with a few of my own recent accomplishments thrown in). You might not agree with everything they have to say (after all, very few of us are running a data center with Google’s requirements), but it’s always interesting to hear one of the biggest network operators on the planet talking about optical technology. You can listen to an on-demand playback of my webinar, which cites the original Google article.
Finally, you may have noticed that one of the largest conferences in the field, the Optical Fiber Communication Conference & National Fiber Optic Engineers Conference (OFC/NFOEC), has invited me to blog about some of the hot topics in the industry leading up to the March 2013 OFC meeting. I’ve been doing this for a few months now, on a wide range of topics including low power optical interconnects, optics for cloud computing and SDN, WAN interconnects, and next generation data centers. You can find out more about speakers on these and other related topics by visiting the OFC cloud/datacom landing page.
Also, I’ll be doing a live daily blog from OFC starting March 17, so be sure to check this site for regular updates during the conference. Or you can stop by & visit me in person, either during my presentation for the OIDA workshop on metrics for aggregated networks or my tutorial on optical interconnects for datacom on Tuesday, March 19. I’ll also be stopping by the Elsevier booth on the trade show floor to check on plans for the fourth edition of my book, the Handbook of Fiber Optic Data Communication, coming out later this year (but that’s another blog…).
As I’ve said before, this is a very interesting time to be an optical network engineer. I hope that some of these recent articles appeal to you, and if there’s another topic you’d like to see me cover, drop me a line or send me a tweet (@Dr_Casimer). And if anybody would like to get together at OFC/NFOEC in Anaheim, be sure to let me know!
Towards an Open Data Center with an Interoperable Network
Part II – What are we trying to fix?
Over the past several years, progressive data centers have undergone fundamental and profound architectural changes. Nowhere is this more apparent than in the data center network infrastructure. In this post, we’ll take a look at some of the problems with conventional networks, and next time we’ll introduce the fundamentals of an approach to deal with these issues.
Instead of under-utilized devices, multi-tier networks, and complex management environments, the modern data center is characterized by highly utilized servers running multiple VMs, flattened, lower latency networks, and automated, integrated management tools. Software defined network overlays are emerging which will greatly simplify the implementation of features such as dynamic workload provisioning, load balancing, redundant paths for high availability, and network reconfiguration. Cloud networks with multi-tenancy, resource pooling, and other features are becoming increasingly commonplace. Finally, to provide business continuity and backup/recovery of mission critical data, high bandwidth links between virtualized data center resources are extended across multiple data center locations.
Highly virtualized data centers offer greater resource utilization and lower costs. They can also simplify management if network issues such as latency, resilience, and multi-tenant support for public and private cloud environments are addressed. To realize the greatest benefits from virtualization, networks must be optimized to support high volumes of east-west traffic. This can be accomplished by flattening the network to a two-tier design, using Layer 2 domains to facilitate VM migration, and deploying network overlays to enable efficient virtual switches. While existing storage networks will likely continue in their present role for some time, the opportunity to converge networking and storage traffic is enabled by new lossless networking protocols that guarantee data frame delivery. Each of these exercises requires a non-trivial extension of the existing data network. Collectively, they present a daunting array of complex network infrastructure changes, with fundamental and far-reaching implications for the overall data center design.
The networking industry has responded to these changes with a bewildering array of standardized and proprietary solutions, making it difficult to determine the best course of action. IBM believes that the practical, cost-effective evolution of data networks must be based on open industry standards and end-to-end interoperability of multi-vendor solutions (for a few words on the importance of standards, see my last blog entry). That’s why IBM has recently published a series of technical briefs, endorsed by many industry leading companies, that lay out a path towards an open data center with an interoperable network (which we’ll call by its acronym, ODIN…after the ruler of Asgard in ancient Norse mythology. Coincidentally, his symbol, the valknut, looks a bit like a two-tier network topology).
Next time, we’ll give you an overview of the first series of ODIN documents and discuss why they’re important. Let me know the biggest problems in your network by responding to this post below, or for shorter problems on my Twitter feed.
Towards an Open Data Center with an Interoperable Network
Part III – What have we done so far?
In support of open, interoperable data center networks, IBM has taken the lead in creating a series of technical briefs known as ODIN (introduced in my last blog post) that describe the issues facing these networks and the standards that can be applied to address them (for the complete list of materials, see the IBM System Networking website). So what is ODIN, and why does it matter?
First, we should clearly state that ODIN is NOT a new industry standard, and does not compete with existing standards. Rather, the ODIN documents explain and interpret existing standards and describe best practices for incorporating them into a multi-vendor network. ODIN can be used to guide strategic planning discussions, help prepare a vendor-agnostic request for proposal (RFP), and clarify preferred technologies to optimize each aspect of the network design. Taking advantage of this material can promote buying confidence by letting administrators choose among multiple networking vendors and avoid incompatible offerings that lock them into a nonstandard architecture.
ODIN addresses best practices and interpretations of networking standards that are vital to efficient data center operations. These methods and standards facilitate the transition from discrete, special-purpose networks, each with its own management tool, to a converged, flattened network with a common set of management tools. They represent a proven approach that has been implemented by IBM and others within their own data centers using existing products, as well as through engagements with industry-leading clients worldwide. The first release of this material includes features such as:
Industry standards supporting flat Layer 2 networks, including features to enable VM migration and alternative Layer 2 and 3 designs
Lossless, converged enhanced Ethernet (including IEEE specifications for data center bridging)
Extended distance connectivity between multiple data centers leveraging MPLS/VPLS and protocol-agnostic optical wavelength division multiplexing, including special consideration for ultra-low latency network requirements
Emerging standards for software-defined networking and OpenFlow, as well as network overlays such as distributed overlay virtual Ethernet (DOVE); see the sketch just after this list for the general idea behind overlay encapsulation
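If you’re wondering what an overlay actually does on the wire, here’s a minimal sketch. DOVE’s own wire format isn’t covered in this post, so this borrows VXLAN-style framing purely for illustration, with a hypothetical tenant ID:

```python
# A stand-in for DOVE using VXLAN-style framing (an 8-byte header: flags
# plus a 24-bit virtual network ID) to show the general shape of
# L2-over-L3 overlay encapsulation. The tenant ID is hypothetical.
import struct

def overlay_header(vni):
    """Build an 8-byte VXLAN-style header carrying a 24-bit network ID."""
    flags = 0x08 << 24                          # 'VNI present' bit in the first word
    return struct.pack("!II", flags, vni << 8)  # VNI occupies the upper 24 bits

hdr = overlay_header(5000)  # tenant network 5000
print(hdr.hex())
# The overlay prepends this header (inside a UDP/IP wrapper) to the tenant's
# original Ethernet frame, so tenant MAC addresses never leak into the
# physical underlay.
```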
Additional features and standards will be added over time. For now, ODIN has been endorsed by many leading companies and universities, and more are expected to join. Drop me a line if you’d like to know how your company or university can participate.
What do you think about an industry standard approach to networking? Give me your feedback below, or send a quick response to my Twitter feed.
Towards an Open Datacenter with an Interoperable Network
Part I – Why Standardize?
Standards have played a pivotal role throughout history. Just ask my 9th grade daughter. This story is going to sound like a digression, but bear with me…like most stories, there’s an important moral at the end that will save money for your IT organization.
This past week, my daughter learned that the economic unification of China between 247 and 221 BC was due, in part, to the standardization of weights and measures, including the length of ox cart axles (which facilitated transport of goods on the road systems). The history of technology contains many examples like this one, showing how standards are beneficial. They promote buying confidence by helping to future-proof purchases (no need to worry that your new ox cart won’t fit on the roads). They encourage competition and commoditization, which lowers capital expense (if all the ox carts are the same size, then I can buy the lowest priced cart that fits my needs). And they promote innovation and interoperability while avoiding confusion in the marketplace (does it matter if my ox cart is red or blue, as long as it fits on the road? Probably not. But if I can build a cart with the same axle width that can hold twice as much produce, then I’ve created meaningful innovation and differentiated myself from the other ox carts).
In the same way, a standardized approach to more modern commodities, like data center switches, makes sense too. Much has been written about how we can standardize on parts of the solution that have long development times, like silicon ASICs, and differentiate through those aspects which have faster turnaround, like software.
But what about the benefits of buying all your ox carts from the same place? Doesn’t this mean you can get lower prices from buying in bulk and having a good relationship with a single ox cart provider? Won’t you have to train your mechanics to fix only one kind of ox cart, using a common set of tools, and thus save training expenses? Surprisingly, the answer is no to all of these questions, according to a Gartner Group study of a large number of customer deployments across major networking equipment vendors. For example, this study found that working with a single vendor actually costs a premium of up to 20% over multi-sourced environments, since that vendor isn’t constrained by competitive pricing. And since the tools to fix different types of ox carts (and network switches) are mostly common regardless of brand, there isn’t a need to increase staff or training.
In fact, according to this study, CIOs who don’t re-evaluate their single vendor networking choices aren’t living up to their fiduciary responsibilities. So check out this report for more details, and next time I’ll tell you how to distinguish between true industry standard networking implementations, and those who just want to take you for a ride in their ox cart.
Questions about how networking standards can save you money? Ask me through either my blog or Twitter feed.