Adva endorses ODIN
Following IBM's announcement this past week at InterOp, there has been a surge of interest in the recently proposed Open Datacenter Interoperable Network (ODIN) technical briefs. I'm pleased to report that Adva Optical Networking, a leading wavelength division multiplexing (WDM) company specializing in WAN transport, has endorsed the ODIN approach on their blog. There's never been a better opportunity for cloud data centers to get in on the ground floor by designing their next generation networks around the best practices and open industry standards referenced in ODIN. We appreciate Adva's support of this direction, and look forward to working with leading industry networking companies to implement the ODIN design recommendations in modern data center networks.
BigSwitch adds their endorsement to ODIN
Following the recent release of the Open Datacenter Interoperable Network (ODIN) technical briefs at InterOp 2012, several companies have publicly pledged their support for the ODIN approach to open standards. Most recently, BigSwitch Networks has posted a nice summary of recent open standards activities at InterOp to their blog, including their endorsement of the ODIN technical briefs. IBM deeply appreciates this show of support for open standards in the data center network, including the full breadth of software defined networking (SDN) approaches (both overlay networks and OpenFlow). IBM has demonstrated the industry's first 40G OpenFlow-enabled switch, and continues to drive strong innovation in SDN and other aspects of the ODIN design approach. Keep watching this blog for more news on ODIN and InterOp 2012, or follow my Twitter feed.
ODIN Approach endorsed by Brocade
I’m pleased to report that Brocade has publicly endorsed the Open Datacenter Interoperable Network (ODIN) approach to designing data center networks. On May 8, IBM released a set of technical briefs describing ODIN during the InterOp conference in Las Vegas. This approach to using industry standards as the preferred means of designing data center networks is discussed further in Brocade's blog. Many thanks to Brocade for their support of open networking standards; I’m sure we’ll have more to say about how to build these solutions in the near future.
I’m pleased to report that BTI has become the latest company
to publicly endorse the Open Datacenter Interoperable Network (ODIN) approach
to designing data center networks. As regular readers of my blog know, IBM has released
a set of technical briefs describing ODIN, which provides an approach to using
open industry standards to create next generation data center networks. I’ve written, podcasted, and been interviewed
many times about ODIN, all of which is linked from my blog. This approach to using industry standards as
the preferred means of designing data center networks has been endorsed in this post from Chandra Pandey, Vice-President of Platform Solutions at BTI. Many thanks for this support of open
networking standards; I’m sure we’ll have more to say about how to create these
solutions with IBM and BTI technology in the near future.
Ciena endorses ODIN
The list of companies endorsing IBM's recently announced Open Datacenter with an Interoperable Network (ODIN) continues to grow. Ciena is the most recent company to endorse ODIN, as noted in their blog post from their CTO and Senior Vice-President, Products and Technology, Steve Alexander. In this post, Ciena says that ODIN "looks to be a nearly ideal approach to allow the connect, compute, and store resources to be virtualized and operationally united for simplicity and scale". In fact, the use of industry standards to enable more tightly integrated solutions has been recently demonstrated in IBM's PureSystems offerings, which were announced on April 11; you can read more about PureSystems in my earlier blog posts. I'm very pleased that Ciena has endorsed the ODIN approach, and I'm sure we'll see more examples of this design approach in the coming months. Remember, let me know what you think about ODIN by commenting on this blog, or on my Twitter feed, and keep watching this site for the latest data center networking news.
Daylight Finally Breaks Through the Clouds
“To my surprise and my delight…the clouds burst to show daylight” - Coldplay
This past month, a lot of people have been asking me to comment on the rumors swirling around a possible IBM open source initiative for software-defined networking (SDN) called Daylight. And I mean a LOT of people, from the audience at the OFC/NFOEC conference, to the Wall Street banks who attended my talk at the Open Network Exchange in Manhattan, to my fellow networking engineers who participated in the online roundtable from the MPLS/Ethernet World Congress. Since I usually enjoy a good technical discussion, it’s been more than a bit frustrating that I couldn’t respond to these rumors directly before now. Waiting for the official announcement of this initiative was made even harder when I read some of the preliminary online speculation about Daylight that was either misinformed, misguided, or just plain wrong.
In any case, it was a tremendous relief for me when The Linux Foundation (the industry’s leading nonprofit organization dedicated to open source development) announced OpenDaylight this morning. So for everyone who’s been waiting along with me, let’s take this opportunity to clear the air about what OpenDaylight actually means for the data networking industry.
As I mentioned in my recent tutorial at OFC/NFOEC, major industry trends such as warehouse scale data centers, big data analytics, and cloud computing in the enterprise are driving companies to revisit their data center network designs. SDN has the potential to lower capital and operating expenses, increase efficiencies, and provide faster time to value in this rapidly changing environment. But in order to fully realize the potential value of SDN, as quickly as possible, we need to go beyond the product line of any one networking company. We need to create a development community that encourages rapid innovation from a broad range of stakeholders serving a common goal. In short, we need to do for networking what Linux did for server operating systems.
OpenDaylight is an open source framework intended to accelerate adoption of Software Defined Networking and to create an open, transparent approach to SDN development. Just as the Linux community created a viable open source operating system, which matured until it was deployed on enterprise-class systems and mainframes, OpenDaylight will create programmable SDN abstractions for many different types of data networks. Much of the initial code will be contributed and supported by industry leading companies who have signed up as either Platinum or Gold members of the OpenDaylight project. I feel this is one of the strongest features of the project; OpenDaylight is not owned by any one company, although many industry leaders have committed both developers and funding to the effort. Besides IBM, founding members include BigSwitch Networks, Brocade, Cisco, Citrix, Ericsson, Juniper, Microsoft, NEC, Red Hat, and VMware; other contributors include Arista, Fujitsu, Alcatel-Lucent, Intel, Dell, Hewlett-Packard, Nuage, and Plumgrid. I’ve been working with many other IBMers for the past few months, talking with these companies and crafting a common perspective for the SDN open source community. You’ll note that a number of these companies had previously publicly endorsed IBM’s point of view regarding open networking standards (the Open Datacenter Interoperable Network, or ODIN). One of the five initial ODIN volumes deals with SDN and its implications for the data center; by definition, supporting OpenDaylight means supporting open networking standards, so it’s nice to see other companies joining this commitment to interoperable SDN networks.
This project is good news for anyone who’s been trying to implement the recent Gartner Group study, which effectively said that corporations who didn’t pursue a multi-vendor networking strategy were paying 15-25% more than necessary for their network, and thus failing to meet their fiduciary responsibilities. But SDN is about much more than just reducing your network operating expenses; it’s also a driving force for new applications and potentially new revenue streams. As I’ve said in my blog on many occasions, open standards and open source software are an excellent way to foster innovation. By supporting OpenFlow and other standards, OpenDaylight allows a global development community to innovate at the speed of software, just as we’ve seen for smart phones or tablet computers.
The first code from the OpenDaylight Project is expected to be available in 3Q this year, and will include an open controller, virtual overlay network, protocol plug-ins, and switch device enhancements. The code is independent of the network operating system, and is released under the Eclipse Public License (EPL) commonly used for Java projects. Just as in any open source community, companies are free to participate based on the merit of their contributions. For example, IBM plans to contribute an open source version of its Distributed Overlay Virtual Ethernet (DOVE) technology, which has been working its way through the IETF standards process for some time now. DOVE software runs on top of the existing network hardware infrastructure and virtualizes layer 2 and 3 network properties. This makes it possible to set up, manage, and scale virtual networks much faster than ever before. Some possible applications of DOVE include merging multiple data networks together (for example, when one company acquires another) or allowing highly virtualized servers to connect with merchant silicon switches (by abstracting the IP and MAC address tables).
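To make the overlay idea a bit more concrete, here is a toy Python sketch (my own illustration, not IBM's actual DOVE code) of the kind of address-mapping service an overlay network maintains: each tenant's virtual addresses map to the physical tunnel endpoint currently hosting the VM, so migrating a VM only updates a table entry rather than reconfiguring the underlying switches. All class, network, and endpoint names here are hypothetical.

```python
# Illustrative sketch of an overlay address-mapping directory
# (hypothetical, not DOVE source code): the underlay switches never
# need to learn per-VM addresses, because encapsulated traffic is
# steered by this mapping instead.

class OverlayDirectory:
    """Toy address-mapping service for an L2/L3 overlay network."""

    def __init__(self):
        # (virtual_network_id, virtual_address) -> physical tunnel endpoint
        self._map = {}

    def register(self, vnid, vaddr, endpoint):
        """Record that vaddr in virtual network vnid lives behind endpoint."""
        self._map[(vnid, vaddr)] = endpoint

    def lookup(self, vnid, vaddr):
        """Find the tunnel endpoint to which a frame should be encapsulated."""
        return self._map.get((vnid, vaddr))

    def migrate(self, vnid, vaddr, new_endpoint):
        """A VM migration just updates the mapping; the underlay is untouched."""
        self._map[(vnid, vaddr)] = new_endpoint

# a VM at virtual address 10.0.0.5 in tenant network 7 starts on
# hypervisor A, then migrates to hypervisor B
d = OverlayDirectory()
d.register(7, "10.0.0.5", "hypervisor-A")
d.migrate(7, "10.0.0.5", "hypervisor-B")
print(d.lookup(7, "10.0.0.5"))  # hypervisor-B
```

The same table-update idea is what makes the merger and merchant-silicon scenarios above tractable: the virtual address space stays stable while the physical mapping underneath it changes.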
I’ve said this before, but it bears repeating…it's a very cool time to be a networking engineer. I’m excited by the potential of OpenDaylight for cloud computing and other applications, and I’m looking forward to working with the global development community on bringing all the benefits of Linux and server virtualization to the data center network.
Still not sure how you feel about open source networking? Now that I can talk freely about OpenDaylight, drop me a line on Twitter @Dr_Casimer
From the Abstract to the Concrete: 5 Reasons why SDN makes a difference
I’m very pleased that the IBM Smarter Computing website has agreed to host my latest blog on why SDN makes a difference for your business. Check it out and let me know what you think, or drop me a line on Twitter (@Dr_Casimer).
From Wall Street to Watson – The 2013 Mark Luchinsky Memorial Lecture
This past week, I was honored to be invited to present the 2013 Mark Luchinsky Memorial Lecture at Penn State, and to meet with some of the faculty and students, including members of the Schreyer Honors College. This talk was presented as part of the lead-up to the 2014 Shaping the Future Summit. I enjoyed the opportunity to talk with students about technology issues, and since we didn’t have time to take everyone’s questions during the event, I’d like to use this blog to respond to several questions that came in through Twitter during and after the lecture.
Question: “When it comes to ethics in innovation, how do you balance when something created for good can be used for evil?”
I see you’ve been reading Google’s list of 10 things they know are true (not doing evil comes in at number 6). While I personally like their technologist version of the Hippocratic Oath for engineers (first, do no harm), there are often large gray areas in practice (for example, should Google refrain from deploying their technology in nations which censor the Internet as a protest against that activity, or should they engage in these nations and try to improve things from within the system?). These are not easy questions. For some techniques on dealing with gray area thinking, I’d recommend the Presidential Leadership Academy at Penn State; I had the opportunity to meet with these students during my recent visit, and was quite impressed by their level of engagement and understanding. As technologists, we have an ethical responsibility towards the technologies which we help create and deploy. However, often we can’t foresee exactly how a given technology will be used, or misused, in practice (there’s a famous story that AT&T originally didn’t want to file a patent application on the laser, because they couldn’t understand what it had to do with telephones). I’d encourage you to always act according to your values, and I try to approach my leadership role in the same way. For more on values-based leadership, you might consider the book “Leadership in my rear view mirror” by former IBMer Jack Beach.
Question: “If technology is so important, why isn’t there a bigger push for it in grade school?”
In some places, there is a big push (particularly in countries outside the U.S.). In many other places, more needs to be done in order to update the curriculum and reflect the recent, growing importance of technology (for example, some states have firm requirements for high school graduation in subjects such as health education and gym, but no requirements for basic technical literacy). There are many reasons for this. Technology advances much faster than most elementary and middle schools can keep up with. Changing the curriculum statewide requires significant effort (though programs like IBM’s sponsorship of P-TECH are making a good start, as recognized recently by President Obama). There are always financial, political, and other barriers to overcome in the real world. There are many schools experimenting with a broader STEM education curriculum, and the international FIRST Lego League is part of that effort. There is also a growing recognition that the rules of higher education are changing as well, including recent efforts to develop very large online classes (MOOCs) delivered at very low cost. I think it’s important to provide students with an understanding of technology as a life skill. Even if they don’t plan to become engineers when they grow up, they will still need to vote on issues affected by technology, science, and math, including whether to believe the statistics from the latest election polls, whether they want to eat meat that’s been irradiated as a preservative, or whether we should invest more in manned or unmanned space exploration.
Question: “What role does disaster recovery play in modern data centers?”
There’s a lot of interest in disaster recovery, following the events of Sept. 11, 2001, and the subsequent array of natural and man-made disasters which have threatened major global data centers. I work on enterprise-class disaster recovery for Fortune 500 clients using IBM’s System z platform, and I developed our qualification program for optical networking companies who want to have their solutions tested under these demanding conditions. Although most companies recognize the need for both disaster backup and business continuity, sometimes we need to be reminded of the business impacts which can result if these solutions are neglected or not tested and updated frequently. Large data centers need to carefully consider how much data they can afford to lose, how fast they need key functions to come back online after a disaster, and other factors so they can appropriately size their recovery strategy. I’m currently working on ways to use software defined networking (SDN) to rapidly re-provision long distance optical networks from wireless mobile controllers (such as a smart phone), so that you can move critical data around while you’re evacuating from a disaster site. We have a demo of this running at the SDN Innovation Lab in the NY State Center for Cloud Computing & Analytics, Marist College, NY, for those who are interested. You can also read more about disaster recovery in the latest edition of my book (insert shameless self-interested plug here), the Handbook of Fiber Optic Data Communication (4th edition, Elsevier/Academic Press).
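The sizing questions above (how much data can you afford to lose, how fast must key functions come back) can be put into rough numbers. Here's a back-of-the-envelope sketch in Python; the figures and function names are my own illustrative examples, not recommendations from the book or a specific IBM offering.

```python
# Rough disaster-recovery sizing: worst-case data loss given a
# replication schedule, and the sustained WAN bandwidth needed to
# keep the recovery site current. All inputs are example values.

def worst_case_data_loss_gb(change_rate_gb_per_hr, replication_interval_hr):
    """Data written since the last replication cycle is what's at risk."""
    return change_rate_gb_per_hr * replication_interval_hr

def min_link_gbps(change_rate_gb_per_hr):
    """Sustained bandwidth needed just to keep pace with the change rate."""
    gigabits_per_hour = change_rate_gb_per_hr * 8  # bytes -> bits
    return gigabits_per_hour / 3600                # per hour -> per second

# example: 90 GB of changed data per hour, replicated every 4 hours
loss = worst_case_data_loss_gb(90, 4)  # 360 GB exposed in the worst case
link = min_link_gbps(90)               # 0.2 Gbit/s sustained minimum
print(loss, round(link, 2))
```

Even this toy model shows why recovery strategy is a business decision as much as a technical one: halving the replication interval halves the worst-case exposure, but only if the long-distance link can absorb the traffic.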
It’s always a pleasure to talk with students about the future of technology. If you’d like to continue the conversation, drop me a line on Twitter @Dr_Casimer
Huawei says more about ODIN
As noted in a recent post on this blog, Huawei had included a mention of the Open Datacenter Interoperable Network (ODIN) in their InterOp Webinar on open standards for cloud networking. In addition, Huawei has now posted a more detailed endorsement of ODIN on their blog site. According to this site, "ODIN addresses best practices and interpretations of networking standards that are vital to efficient data center operation". For those of you who haven't reviewed the ODIN materials yet, they include a description of the transformation taking place in modern data center networks and how to best address these issues using open industry standards. Keep watching this space for more news on ODIN and other data center networking issues.
Huawei mentions ODIN during InterOp webinar
During a webinar presented at InterOp 2012 describing how to prepare your infrastructure for the cloud using open standards, Huawei indicated their support for the Open Datacenter Interoperable Network (ODIN) approach. Huawei joins a growing number of companies who recognize that the best path forward for next generation data centers lies in the use of open industry standards and protocols. You can read more about the importance of open standards and ODIN in my earlier blog posts or through my Twitter feed. Stay tuned for the latest news from InterOp and the world of data center networking!
IBM PureSystems: It’s all about the network - Part I
Every few years, IBM announces some major innovation in the way computers are designed, used or deployed. You might remember the transition from CMOS to BiCMOS mainframes, copper-doped ASICs, or open source Linux for the enterprise. Each of these represented a major shift in the way we think about and use computational power to accomplish a huge variety of tasks. Recently, IBM announced its latest innovation, the PureSystems platform of integrated servers, storage and networking.
By now, you’ve probably seen at least some information about how PureSystems accelerates cloud deployments, simplifies the data center, and consolidates computing resources. But I’m a networking guy, so my view of the world is a bit different. Much like the famous view of the world as seen from New York City, when I look at PureSystems, I see a lot of advanced servers, storage, and software hanging off the true technological marvel – the integrated data center network.
At the risk of appearing a bit single-minded, I’d like to talk about one of the unsung heroes of the PureSystems revolution, namely the networking technology that ties PureSystems together. And then I’d like to point out that not only is the network a key part of PureSystems, it’s got the potential to drive the next series of big innovations on this platform, and maybe even across the computing industry.
Let’s start with a quick review of the PureSystems network.
First, it’s designed for flexibility; you can choose a combination of networking protocols, including Fibre Channel (up to 16 Gbit/second), Ethernet (10 to 40 Gbit/second), or InfiniBand (QDR and FDR data rates). You can plug up to four switches into a PureSystems chassis, and link multiple chassis together using the 10, 40 or 10/40GbE IBM System Networking RackSwitch top of rack (TOR) switches. This lets you scale PureSystems from a single chassis, up through multi-rack systems (where a rack can hold up to 4 chassis).
PureSystems also supports a virtual Ethernet switch running in the hypervisor, the IBM Distributed Switch 5000v. IBM’s virtual switches, blade switches, and TORs all support industry standards including switch-resident IBM VMready with IEEE 802.1Qbg to enable VM migration (either between VMs on the same physical server, or across multiple physical servers).
And, this platform makes really good use of server virtualization; each chassis can hold up to 14 half-wide blade servers or 7 full-wide blade servers, running your choice of workloads on Linux, Windows, or AIX. Yes, I said AIX…you can plug either IBM Power microprocessor blades or Intel x86 blades into a PureSystems chassis. With around 160 servers in a 4-rack system, even a moderately virtualized system can fit over 1,600 VMs quite comfortably. That’s a tremendous amount of compute power in a relatively small package, and it comes pre-integrated with a single system manager that lets you manage all the physical and virtual resources in the system (without any third party tools).
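For the curious, the capacity figures above are easy to check with some back-of-the-envelope arithmetic. The VM-per-server ratio below is my own illustrative assumption (the post only says "moderately virtualized"); the blade and rack counts come straight from the text.

```python
# Back-of-the-envelope check of the PureSystems capacity figures
# quoted above; vms_per_server is an assumed example ratio.

HALF_WIDE_BLADES_PER_CHASSIS = 14
CHASSIS_PER_RACK = 4
RACKS = 4

# theoretical maximum if every bay held a half-wide blade
max_blades = HALF_WIDE_BLADES_PER_CHASSIS * CHASSIS_PER_RACK * RACKS  # 224

# the post cites "around 160 servers" for a 4-rack system,
# i.e. a mixed or partially populated configuration
servers = 160
vms_per_server = 10  # assumed "moderately virtualized" ratio

total_vms = servers * vms_per_server
print(max_blades, total_vms)  # 224 1600
```

At an assumed 10 VMs per physical server, 160 servers comfortably clears the "over 1,600 VMs" mark, with headroom to spare if more bays are populated.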
Now that we know a bit about the networking technology inside PureSystems, why should we get excited about it? Tune in to Part II of my blog to find out! Meanwhile, let me know what you think about the importance of networking for integrated systems by commenting on my blog, or through my Twitter feed.
IBM PureSystems: It’s all about the network
Part II (Electric Boogaloo)
In Part I of this post, we looked more closely at the networking under the covers of an IBM PureSystems platform. We found that a reasonably configured PureSystems solution could comfortably support a whole lot of VMs in the space of only a few racks (no, I’m not going to repeat the numbers here; check out my last post for more details). I also promised to explain why networking would drive the next big innovations on this platform.
This dense packing of compute power is exactly why the network will be so important to the future of this system. Before PureSystems, large numbers of servers and storage systems would have to be spread out across the data center; network latency and physical distance would ultimately limit performance. Now that multi-core processors, advanced storage technology, and other features have made it possible to fit this much processing power into a few racks, we can take full advantage of Ethernet running up to 40 Gbit/s and Fibre Channel running up to 16 Gbit/s to realize very high bandwidth and low latency over short distances.
Now, imagine what happens in a few years as these trends continue. When the network can run 100 Gbit/second or faster, it becomes the highest speed interconnect on the platform. We’ll be able to interconnect more processors (each of which will also be more powerful than they are today and will host more VMs), with negligible performance impact due to the network. Multi-processor systems on the order of several thousand physical processors could become economically viable for many users, not just the most advanced applications.
At the same time, storage is integral to PureSystems, not a separate add-on from another company. In the future, server to storage access technologies previously reserved only for high performance computing can begin to trickle down into more commercial integrated platforms. And future integrated systems, enabled by the network, could then reach levels of parallelism and performance far beyond what we know today; think of how video games have brought the equivalent of a graphic supercomputer into your living room at very low cost. With latency between servers and storage becoming a non-issue, these systems would be ideal for processing the type of gigantic data sets which are showing up in financial, health care, retail, transportation, and a host of other fields. All of this stems from the PureSystems being rolled out now, so you get not only the immediate benefits of this platform but a path forward into even more powerful computing applications as time goes on.
Of course, when this happens everyone will marvel at the incredible advances in multi-core processors, multi-thread software, and other fields. But let’s not forget the standards-based, high bandwidth, physical and virtual networks under the covers of these systems that will quietly be doing their part to revolutionize computing, yet again.
What do you think about the future of networks, or video games for that matter? Share your comments below, or respond via my Twitter feed.
Juniper endorses ODIN Approach
I’m pleased to report that Juniper Networks has publicly endorsed the Open Datacenter Interoperable Network (ODIN) approach to designing data center networks. If you've been following this blog, then you know that on May 8, IBM released a set of technical briefs describing ODIN during the InterOp conference in Las Vegas. This approach to using industry standards as the preferred means of designing data center networks has been supported by Juniper, as discussed in this blog post from Liz King, Vice-President of Global Alliances. Many thanks to Juniper for their support of open networking standards; I’m sure we’ll have more to say about how these solutions should be designed in the near future.
Marist College endorses ODIN
In addition to the many industry leading companies who have endorsed IBM's recently released technical briefs describing an Open Datacenter with an Interoperable Network (ODIN), the first academic endorsement of ODIN has recently come from Marist College (Go Red Foxes!). In their endorsement, Marist notes that their support of ODIN was part of their broader efforts to ensure that the next generation of technology students are prepared for the challenges which await them. Marist also cited their related work with the National Science Foundation-funded lab for enterprise computing, their network interoperability lab, and their cloud computing computational resources. Also commenting on ODIN as part of their Twitter feed were IBM Vice President Ross Mauri (a member of the Marist Board of Directors) and Marist Vice President and Chief Information Officer Bill Thirsk. I'm sure there will be opportunities for IBM and other ODIN supporters to work with colleges such as Marist on research and interoperability that will benefit the open design principles set forth in the ODIN documents.
My Crystal Ball: Top 3 Predictions for 2013
As the year draws to a close, I predict there will be no shortage of articles looking back at 2012 and ahead to 2013. And my humble blog is no exception.
Looking back on 2012, I’ve kept you up to date on the latest networking developments from IBM and the industry. I’ve also recently been blogging about the upcoming OFC/NFOEC conference in March 2013 (where I’m giving a tutorial presentation) on topics including software-defined networking (SDN), energy efficient networking, cloud computing, wavelength multiplexing, 100G networks, and more. And don't miss my latest webinar with Forrester Group on SDN and how it can make a big impact on your plans for next year.
Assuming the ancient Mayans were wrong and the world doesn’t end in 2012, here’s my top 3 predictions to keep in mind for the coming year, counting down in reverse order:
(3) Open standards will be the best way forward for data networking. This past year, the response to IBM’s Open Datacenter Interoperable Network (ODIN) was huge – over 20,000 views in the first two weeks alone. Look for more industry standards and open source software to make an impact in 2013, including ODIN on steroids and lots more from academic partners like the Marist College SDN/OpenFlow lab.
(2) Data rates will continue to increase. OK, that was an easy one, but it’s happening faster than most people thought…10G links are ubiquitous, 40G links are going mainstream in 2013, and 100G for the data center is right around the corner in 2014.
(1) Network virtualization will be the next big thing. Software-defined networking (SDN) is being deployed by some large users (just ask Google), the OpenFlow 1.3 standard has been released, and overlay networks like DOVE are moving forward in the standards bodies.
Looking ahead into the new year, I’ll be bringing you more tweets and blogs on the latest data center networking news, as well as new podcasts and webinars. Maybe I’ll see you at OFC 2013 for my tutorial, and watch for the next edition of my book in 2013. Or, you can catch me at any of the remaining Hudson Valley FIRST Lego League tournaments in January and February; we’ll be sending the Hudson Valley Champion to World Fest this year, and that team will be selected from a field of over 80 teams through six qualifying events held across New York’s Hudson Valley, including Troy, Ballston Spa, Poughkeepsie, LaGrange, Elmsford, and Sleepy Hollow tournaments. And, as always, you can keep the conversation going by commenting on my blog or Twitter. I’d like to wish all my readers a safe, happy, healthy holiday season and a prosperous new year.
NEC endorses ODIN
During the 2012 InterOp conference in Las Vegas, IBM introduced a set of technical briefs describing the path towards creating an Open Datacenter with an Interoperable Network (ODIN). The approach of using open industry standards in the data center network was recently endorsed by NEC Corporation on their corporate blog. In particular, NEC mentions IBM's work with the Open Networking Foundation (ONF) and their efforts to create software-defined networking standards (including both OpenFlow and network overlays) for next generation data center networks. I'm very pleased by NEC's support for software-defined networking and other open standards in the data center network. Stay tuned to this blog or my Twitter feed to hear more about this and related topics.
ODIN endorsed by Alcatel-Lucent
I’m pleased to report that Alcatel-Lucent has publicly endorsed the Open Datacenter Interoperable Network (ODIN) approach to designing data center networks. As regular readers of my blog know, IBM released a set of technical briefs describing ODIN during the InterOp conference in Las Vegas earlier this year. This approach to using industry standards as the preferred means of designing data center networks has been endorsed in this post from Sam Bucci, a Vice-President at Alcatel-Lucent. Many thanks for this support of open networking standards; I’m sure we’ll have more to say about how to build these solutions with Alcatel-Lucent in the near future.
ODIN endorsed by Extreme Networks
Earlier today, IBM released a series of technical briefs describing the Open Datacenter Interoperable Network (ODIN) during InterOp. The ODIN approach to open networking has been endorsed by Extreme Networks, and you can read about it in their blog post. Both companies share a commitment to open industry standards within the data center network, an approach which should benefit clients with a lower total cost of ownership and superior performance.
ODIN Sets the Standard for Open Networking
"If you want to go quickly, go alone. If you want to go far, go together." – African proverb
During InterOp 2012 in Las Vegas, IBM released a set of five technical briefs which lay out the path towards creating an Open Datacenter with an Interoperable Network (ODIN). This approach uses industry standards as the preferred means to address key issues in next generation data center networking. The response has been tremendous, and ODIN has been very well received across the industry. I've been posting a lot about this in my blog lately, but for your convenience here's the current list of everyone who's endorsed ODIN so far, in no particular order:
Juniper Networks noted in an endorsement from their Vice President of Global Alliances that there is an unprecedented array of technical challenges which ODIN will address, including cost effective scaling, highly virtualized data centers, and reliable delivery of data frames.
Brocade said that “using an approach like ODIN…facilitates the deployment of new technologies”
Huawei said that “ODIN addresses best practices and interpretations of networking standards that are vital to efficient data center operation.” Also, Huawei Fellow Peter Ashwood-Smith shows an ODIN view of the future data center network in his webinar for Interop, entitled “How to prepare your infrastructure for the cloud using open standards.”
Extreme Networks said in their endorsement that “Having open, interoperable, and standard-based technologies can enhance (these) cost savings by allowing choice of best-of-breed technologies.”
NEC noted that software-defined networking (SDN) is part of ODIN, and has emerged as the preferred approach to solving Big Data and network bottleneck issues.
BigSwitch said in their blog “The Importance of Being Open” that “ODIN is a great example of how we need to maintain openness and interoperability in next generation networks”.
Adva Optical Networking, in their blog on "the missing piece in the cloud computing puzzle", talked about the role of ODIN in the wide area network, including dark fiber solutions, MPLS/GMPLS, and emerging trends using SDN to manage cloud computing and the WAN. They also cited recent SDN work with the OFELIA project in Europe as an example of ongoing work towards open standards in the WAN.
Ciena pointed out in a post from their CTO and Senior Vice-President that “the use of open standards has been one of the fundamental “change agents” in the networking industry”. These standards are “associated with encouraging creativity by enabling a diverse and rapidly expanding user group” and “generally support the most cost-effective scaling”. They called ODIN “a nearly ideal approach” and said that ODIN “is on its way to becoming industry best-practice for transforming data-centers”.
Marist College provided a university’s perspective, as their CIO noted that their support of ODIN was part of their broader efforts to ensure that the next generation of technology students are prepared for the challenges which await them. Marist also cited related work with their National Science Foundation funded lab for enterprise computing and their cloud computing computational resources.
Thanks to everyone for showing your support of open industry standards and the ODIN approach to data center networking. I’m honored and humbled by this strong show of support from so many industry leaders, and I’m very excited to be taking the first steps with all of you on this journey towards a more open, interoperable data center network. As we continue to develop more content for ODIN, both around new standards as well as deeper technical descriptions of reference architectures which implement the ODIN design principles, I’ll keep you posted on further activities with these and other companies.
Would you like to be next to endorse ODIN, and receive eternal fame and glory by being mentioned in my blog? Let me know where I can point to your endorsement, or drop me a line on my Twitter feed.
SDN at the NSF ECC: or, the latest OpenFlow research
Guess I’ve been doing too much Twittering lately; for the acronym challenged, I’m talking about Software-Defined Networking research at the National Science Foundation’s conference on Enterprise Computing, hosted at Marist College (one of the prominent endorsers of the ODIN documents). This past week at the Enterprise Computing Conference has been really interesting; if you’ve never been to this event, I’d encourage you to consider it. Among other things, I learned about several recent case studies showing that IT education can lead to productive employment, and about a free IBM service that connects people with System Z skills to potential jobs. While there have been a lot of good talks, I’d like to spend some time today going over Marist’s contributions to software-defined networking and OpenFlow.
For those of you who don’t know what SDN and OpenFlow mean, beyond being some of the hottest buzzwords in the networking industry right now, you can check out the appropriate volume of the Open Datacenter Interoperable Network (ODIN) reference architecture for a detailed introduction to this topic and the problems it addresses. For those who just need a quick refresher, software-defined networking is an approach which allows the basic data flows through a switch to be manipulated through an external controller. It’s an industry standard approach being led by the Open Networking Foundation (ONF), a consortium run by the world’s largest network users (Google, Facebook, Verizon, and more). OpenFlow is a relatively new industry standard which separates the data plane and control plane of a switch, creating flow table abstractions: data flows are matched on the contents of their packets, and a set of actions is applied to each matching flow; traffic that matches no flow entry can be blocked or filtered. Optimal paths through the network are defined by the OpenFlow controller, rather than by proprietary software within the switch.
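To make the flow table abstraction concrete, here's a minimal sketch in Python of the match/action behavior described above. The class names and field names are illustrative only, not taken from the OpenFlow specification; real OpenFlow matches cover many more header fields.

```python
# Minimal sketch of an OpenFlow-style flow table: entries match on packet
# header fields and carry actions; unmatched traffic is dropped, mirroring
# the "block or filter unassigned flows" behavior described above.
# Names (FlowEntry, FlowTable) are illustrative, not from the OpenFlow spec.

class FlowEntry:
    def __init__(self, match, actions, priority=0):
        self.match = match        # e.g. {"ip_dst": "10.0.0.2"}
        self.actions = actions    # e.g. ["output:2"]
        self.priority = priority

    def matches(self, packet):
        # A packet matches when every field in the match set agrees.
        return all(packet.get(k) == v for k, v in self.match.items())

class FlowTable:
    def __init__(self):
        self.entries = []

    def add(self, entry):
        self.entries.append(entry)
        # Highest-priority entries are consulted first.
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def lookup(self, packet):
        for entry in self.entries:
            if entry.matches(packet):
                return entry.actions
        return ["drop"]   # default: no flow assigned, traffic is blocked

table = FlowTable()
table.add(FlowEntry({"ip_dst": "10.0.0.2"}, ["output:2"], priority=10))
print(table.lookup({"ip_src": "10.0.0.1", "ip_dst": "10.0.0.2"}))  # ['output:2']
print(table.lookup({"ip_dst": "192.168.1.1"}))                     # ['drop']
```

In a real deployment the controller installs these entries into the switch over the OpenFlow protocol, rather than the table living in application code.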
One of the potential benefits of OpenFlow is that it allows you to innovate at Internet speeds, by just changing the software rather than replacing or reconfiguring the switch hardware. There are still open questions about just how large an OpenFlow controller can scale, how many controllers we need, etc. Marist College has created an SDN lab which will contribute to the OpenFlow community, support research around SDN, and possibly support compliance testing in the near future. They are engaged with some large OpenFlow switch providers (including IBM) and some interested OpenFlow adopters (to be named later) to investigate use cases and performance limitations of the current OpenFlow protocol. Their current lab environment includes four IBM G8264 OpenFlow enabled 10/40G switches in a spine-leaf configuration, running under an open source Floodlight controller. These switches interconnect a server farm based on IBM x86, Power, and System Z enterprise servers. Many of the x86 servers run the VMware hypervisor and the IBM 5000v virtual switch. The servers are connected via a separate Fibre Channel SAN to various enterprise storage devices.
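For readers unfamiliar with spine-leaf, the wiring pattern of a fabric like the one in Marist's lab can be sketched in a few lines of Python. Note this is a hypothetical rendering: the blog post doesn't say how the four switches are split between tiers, so the two-spine/two-leaf split below is an assumption.

```python
# Hypothetical sketch of a four-switch spine-leaf fabric (assuming a
# two-spine, two-leaf split, which the post does not specify): every leaf
# links to every spine, so any two servers are at most two switch hops apart.

spines = ["spine-1", "spine-2"]
leaves = ["leaf-1", "leaf-2"]

# Full mesh between tiers; leaves never connect to each other directly.
links = [(leaf, spine) for leaf in leaves for spine in spines]
for leaf, spine in links:
    print(f"{leaf} <-> {spine}")
```

The appeal of this topology for OpenFlow research is that the controller sees multiple equal-cost paths between any pair of leaves and can choose among them per flow.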
One of Marist’s early contributions has been to create an open source Floodlight administrative control panel (FACP) that can be used for network administration. The FACP eliminates the need to write Python scripts to control the switches, thereby reducing management complexity. FACP provides an abstraction of the network, and a configuration application can be built against this abstraction. At the conference, Marist held a demo showing how this controller can provision quality of service and routing of Layer 2 & 3 VLANs in the network. Manipulation of firewall ACLs is also possible, and future extensions may include MPLS and other WAN related protocols. Ongoing work in this area is focused on creating a static flow pusher: a programmable interface for writing scripts that manage flow tables across the network through the Floodlight REST API.
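To give a flavor of what scripting against a controller's REST API looks like, here's a hedged Python sketch of pushing a static flow entry to a Floodlight controller. The endpoint path and field names follow the Floodlight static flow entry pusher of this era and may differ across controller versions; the controller address and switch DPID are placeholders.

```python
import json
import urllib.request

# Placeholder controller address; Floodlight's REST API listens on 8080 by default.
CONTROLLER = "http://127.0.0.1:8080"

def make_flow_entry(name, dpid, in_port, out_port, priority=32768):
    # Build a static flow entry: match traffic arriving on in_port of the
    # switch identified by dpid, and forward it out out_port.
    # Field names follow the Floodlight static flow entry pusher of this era.
    return {
        "switch": dpid,
        "name": name,
        "priority": str(priority),
        "ingress-port": str(in_port),
        "active": "true",
        "actions": "output=%d" % out_port,
    }

def push_flow(entry):
    # POST the entry as JSON to the controller; requires a live Floodlight.
    data = json.dumps(entry).encode("utf-8")
    req = urllib.request.Request(
        CONTROLLER + "/wm/staticflowentrypusher/json",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()

entry = make_flow_entry("ecc-demo-flow", "00:00:00:00:00:00:00:01", 1, 2)
print(json.dumps(entry, indent=2))
# push_flow(entry)  # uncomment when a Floodlight controller is running
```

This is exactly the kind of repetitive scripting that an abstraction layer like Marist's FACP is meant to hide from the administrator.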
Further investigation will include such topics as demonstrating multi-vendor interoperability under a common Floodlight controller, and exploring the limits of scalability and security associated with OpenFlow networking. Keep up with their latest work and see their presentation from the NSF conference.
Want to suggest another TLA (three letter acronym) for my list? Comment on this blog entry below, or drop me a line on my Twitter feed.