I’m pleased to report that BTI has become the latest company to publicly endorse the Open Datacenter Interoperable Network (ODIN) approach to designing data center networks. As regular readers of my blog know, IBM has released a set of technical briefs describing ODIN, which provides an approach to using open industry standards to create next generation data center networks. I’ve written, podcasted, and been interviewed many times about ODIN, all of which is linked from my blog. This approach to using industry standards as the preferred means to designing data center networks has been endorsed in this post from Chandra Pandey, Vice-President of Platform Solutions at BTI. Many thanks for this support of open networking standards; I’m sure we’ll have more to say about how to create these solutions with IBM and BTI technology in the near future.
Data Center Networking
Casimer DeCusatis 2700058MPY firstname.lastname@example.org Tags:  #network #cloud #odin #ibmodin #sysnet #networking 722 Visits
Casimer DeCusatis 2700058MPY email@example.com Tags:  #sysnet #networking #odin #ibmodin 1 Comment 2,154 Visits
Daylight Finally Breaks Through the Clouds
“To my surprise and my delight…the clouds burst to show daylight” - Coldplay
This past month, a lot of people have been asking me to comment on the rumors swirling around a possible IBM open source initiative for software-defined networking (SDN) called Daylight. And I mean a LOT of people, from the audience at the OFC/NFOEC conference, to the Wall Street banks who attended my talk at the Open Network Exchange in Manhattan, to my fellow networking engineers who participated in the online roundtable from the MPLS/Ethernet World Congress. Since I usually enjoy a good technical discussion, it’s been more than a bit frustrating that I couldn’t respond to these rumors directly before now. Waiting for the official announcement of this initiative was made even harder when I read some of the preliminary online speculation about Daylight that was either misinformed, misguided, or just plain wrong.
In any case, it was a tremendous relief for me when The Linux Foundation (the industry’s leading nonprofit organization dedicated to open source development) announced OpenDaylight this morning. So for everyone who’s been waiting along with me, let’s take this opportunity to clear the air about what OpenDaylight actually means for the data networking industry.
As I mentioned in my recent tutorial at OFC/NFOEC, major industry trends such as warehouse scale data centers, big data analytics, and cloud computing in the enterprise are driving companies to revisit their data center network designs. SDN has the potential to lower capital and operating expenses, increase efficiencies, and provide faster time to value in this rapidly changing environment. But in order to fully realize the potential value of SDN, as quickly as possible, we need to go beyond the product line of any one networking company. We need to create a development community that encourages rapid innovation from a broad range of stakeholders serving a common goal. In short, we need to do for networking what Linux did for server operating systems.
OpenDaylight is an open source framework intended to accelerate adoption of Software Defined Networking and to create an open, transparent approach to SDN development. Just as the Linux community created a viable open source operating system, which matured until it was deployed on enterprise-class systems and mainframes, OpenDaylight will create programmable SDN abstractions for many different types of data networks. Much of the initial code will be contributed and supported by industry leading companies who have signed up as either Platinum or Gold members of the OpenDaylight project. I feel this is one of the strongest features of the project; OpenDaylight is not owned by any one company, although many industry leaders have committed both developers and funding to the effort. Besides IBM, founding members include BigSwitch Networks, Brocade, Cisco, Citrix, Ericsson, Juniper, Microsoft, NEC, Red Hat, and VMWare; other contributors include Arista, Fujitsu, Alcatel-Lucent, Intel, Dell, Hewlett Packard, Nuage, and Plumgrid. I’ve been working with many other IBMers for the past few months, talking with these companies and crafting a common perspective for the SDN open source community. You’ll note that a number of these companies had previously publicly endorsed IBM’s point of view regarding open networking standards (the Open Datacenter Interoperable Network, or ODIN). One of the five initial ODIN volumes deals with SDN and its implications for the data center; by definition, supporting OpenDaylight means supporting open networking standards, so it’s nice to see other companies joining this commitment to interoperable SDN networks.
This project is good news for anyone who’s been trying to implement the recent Gartner Group study, which effectively said that corporations who didn’t pursue a multi-vendor networking strategy were paying 15-25% more than necessary for their network, and thus failing to meet their fiduciary responsibilities. But SDN is about much more than just reducing your network operating expenses; it’s also a driving force for new applications and potentially new revenue streams. As I’ve said in my blog on many occasions, open standards and open source software are an excellent way to foster innovation. By supporting OpenFlow and other standards, OpenDaylight allows a global development community to innovate at the speed of software, just as we’ve seen for smart phones or tablet computers.
The first code from the OpenDaylight Project is expected to be available in 3Q this year, and will include an open controller, virtual overlay network, protocol plug-ins, and switch device enhancements. The code is independent of the network operating system, and is governed by best practices such as the Eclipse Public License (EPL) commonly used for Java. Just as in any open source community, companies are free to participate based on the merit of their contributions. For example, IBM plans to contribute an open source version of its Distributed Overlay Virtual Ethernet (DOVE) technology, which has been working its way through the IETF standards bodies for some time now. DOVE software runs on top of the existing network hardware infrastructure and virtualizes layer 2 and 3 network properties. This makes it possible to set up, manage, and scale virtual networks much faster than ever before. Some possible applications of DOVE include merging multiple data networks together (for example, when one company acquires another) or allowing highly virtualized servers to connect with merchant silicon switches (by abstracting the IP and MAC address tables).
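To make the overlay idea a little more concrete, here's a minimal Python sketch in the spirit of DOVE (this is illustrative only, not IBM's actual implementation or API): each virtual network gets its own identifier, so two merged networks can keep overlapping MAC/IP spaces, and the overlay maps a destination VM to the physical tunnel endpoint it lives behind. All names and addresses below are made up.

```python
# Illustrative overlay mapping in the spirit of DOVE (not actual DOVE code):
# each tenant network gets a virtual network ID (vnid), so two merged
# companies can keep overlapping address spaces; the overlay maps
# (vnid, vm_mac) to the physical host (tunnel endpoint) that traffic
# should be encapsulated toward.

mapping = {}  # (vnid, vm_mac) -> tunnel endpoint IP of the hosting server

def register_vm(vnid, vm_mac, host_ip):
    """Record which physical host a VM in a given virtual network lives on."""
    mapping[(vnid, vm_mac)] = host_ip

def encapsulate(vnid, dst_mac):
    """Look up the tunnel endpoint for a destination VM, or None if unknown."""
    return mapping.get((vnid, dst_mac))

# Two tenants (say, an acquiring company and an acquired one) reuse the
# same VM MAC address without conflict, because the vnid disambiguates:
register_vm(100, "02:00:00:00:00:01", "192.168.1.10")
register_vm(200, "02:00:00:00:00:01", "192.168.2.20")

print(encapsulate(100, "02:00:00:00:00:01"))  # 192.168.1.10
print(encapsulate(200, "02:00:00:00:00:01"))  # 192.168.2.20
```

The point of the abstraction is that the virtual network's addressing is decoupled from the physical fabric: moving a VM just updates one mapping entry, with no change to the underlying switches.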
I’ve said this before, but it bears repeating…it's a very cool time to be a networking engineer. I’m excited by the potential of OpenDaylight for cloud computing and other applications, and I’m looking forward to working with the global development community on bringing all the benefits of Linux and server virtualization to the data center network.
Still not sure how you feel about open source networking? Now that I can talk freely about OpenDaylight, drop me a line on Twitter @Dr_Casimer
Casimer DeCusatis 2700058MPY firstname.lastname@example.org Tags:  #ibm #odin #standards #interop #networking #ibmodin 700 Visits
NEC endorses ODIN
During the 2012 InterOp conference in Las Vegas, IBM introduced a set of technical briefs describing the path towards creating an Open Datacenter with an Interoperable Network (ODIN). The approach of using open industry standards in the data center network was recently endorsed by NEC Corporation on their corporate blog. In particular, NEC mentions IBM's work with the Open Network Foundation (ONF) and their efforts to create software-defined networking standards (including both OpenFlow and network overlays) for next generation data center networks. I'm very pleased by NEC's support for software-defined networking and other open standards in the data center network; stay tuned to this blog or my Twitter feed to hear more about this and related topics.
Casimer DeCusatis 2700058MPY email@example.com Tags:  #standards #networking #odin #interop #ibm 621 Visits
ODIN endorsed by Extreme Networks
Earlier today, IBM released a series of technical briefs describing the Open Datacenter Interoperable Network (ODIN) during InterOp. The ODIN approach to open networking has been endorsed by Extreme Networks, and you can read about it in their blog post. Both companies share a commitment to open industry standards within the data center network, an approach which should benefit clients with a lower total cost of ownership and superior performance.
Casimer DeCusatis 2700058MPY firstname.lastname@example.org Tags:  #nsf #networking #sdn #sysnet #marist #interop #ecc #odin #ibmodin 1,319 Visits
SDN at the NSF ECC: or, the latest OpenFlow research
Guess I’ve been doing too much Twittering lately; for the acronym challenged, I’m talking about Software-Defined Networking research at the National Science Foundation’s conference on Enterprise Computing, hosted at Marist College (one of the prominent endorsers of the ODIN documents). This past week at the Enterprise Computing Conference has been really interesting; if you’ve never been to this event, I’d encourage you to consider it. Among other things, I learned about several recent case studies showing that IT education can lead to productive employment, and about a free IBM service that connects people with System Z skills to potential jobs. While there have been a lot of good talks, I’d like to spend some time today going over Marist’s contributions to software-defined networking and OpenFlow.
For those of you who don’t know what SDN and OpenFlow mean, beyond being some of the hottest buzzwords in the networking industry right now, you can check out the appropriate volume of the Open Datacenter Interoperable Network (ODIN) reference architecture for a detailed introduction to this topic and the problems it addresses. For those who just need a quick refresher, software-defined networking is an approach which allows the basic data flows through a switch to be manipulated through an external controller. It’s an industry standard approach being led by the Open Network Foundation (ONF), a consortium run by the world’s largest network users (Google, Facebook, Verizon, and more). OpenFlow is a relatively new industry standard which separates the data plane and control plane of a switch, creating flow table abstractions (in other words, you can match data flows based on content of the packets and perform actions associated with each flow match; if you don’t assign a flow, traffic can be blocked or filtered using this technique). Optimal paths through the network are defined by the OpenFlow controller, rather than some proprietary software within the switch.
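To make the flow table abstraction concrete, here's a minimal Python sketch of match-action processing with the default-drop behavior described above. This is a toy model of the concept, not any vendor's or the OpenFlow specification's actual data structures; the field names are made up for illustration.

```python
# Toy model of an OpenFlow-style flow table: each entry matches on packet
# header fields and carries an action; unmatched (unassigned) traffic is
# dropped, mirroring the block/filter behavior described above.

def matches(entry, packet):
    """A packet matches an entry if every specified field agrees;
    fields absent from the entry act as wildcards."""
    return all(packet.get(field) == value
               for field, value in entry["match"].items())

def process(flow_table, packet):
    """Return the action of the highest-priority matching entry, or 'drop'."""
    for entry in sorted(flow_table, key=lambda e: -e["priority"]):
        if matches(entry, packet):
            return entry["action"]
    return "drop"  # table miss: no flow assigned, so traffic is blocked

flow_table = [
    {"priority": 100, "match": {"ip_dst": "10.0.0.2", "tcp_dst": 80},
     "action": "output:2"},   # web traffic to this server goes out port 2
    {"priority": 10, "match": {"ip_dst": "10.0.0.2"},
     "action": "output:3"},   # everything else to this server, port 3
]

print(process(flow_table, {"ip_dst": "10.0.0.2", "tcp_dst": 80}))  # output:2
print(process(flow_table, {"ip_dst": "10.0.0.9"}))                 # drop
```

In a real deployment the controller computes and installs these entries across many switches; the key idea is simply that forwarding decisions become data (flow entries) manipulated by external software, rather than logic baked into the switch.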
One of the potential benefits of OpenFlow is that it allows you to innovate at Internet speeds, by just changing the software and not replacing or reconfiguring the switch hardware. There are still open questions about just how large an OpenFlow controller can scale, how many controllers we need, etc. Marist College has created an SDN lab which will contribute to the OpenFlow community, support research around SDN, and possibly support compliance testing in the near future. They are engaged with some large OpenFlow switch providers (including IBM) and some interested OpenFlow adopters (to be named later) to investigate use cases and performance limitations of the current OpenFlow protocol. Their current lab environment includes four IBM G8264 OpenFlow enabled 10/40G switches in a spine-leaf configuration, running under an open source FloodLight controller. These switches interconnect a server farm based on IBM x86, Power, and System Z enterprise servers. Many of the x86 servers run the VMWare hypervisor and the IBM 5000v virtual switch. The servers are connected via a separate Fibre Channel SAN to various enterprise storage devices.
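One attraction of the spine-leaf design used in the Marist lab is its uniform path length. The blog post doesn't say how the four G8264s are split between tiers, so the two-spine, two-leaf wiring below is an assumption for illustration; the sketch just verifies the defining property, that every leaf-to-leaf path crosses exactly one spine (two hops).

```python
# Hypothetical spine-leaf wiring for a four-switch fabric (the split into
# 2 spines and 2 leaves is assumed, not stated in the post). Every leaf
# uplinks to every spine, so any leaf-to-leaf path is exactly two hops.
from itertools import permutations

spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2"]

# adjacency: each leaf connects to every spine, and vice versa
links = {leaf: set(spines) for leaf in leaves}
for spine in spines:
    links[spine] = set(leaves)

def hop_count(src, dst):
    """Breadth-first search hop count between two switches."""
    frontier, visited, hops = {src}, {src}, 0
    while dst not in frontier:
        frontier = {n for f in frontier for n in links[f]} - visited
        visited |= frontier
        hops += 1
    return hops

# every distinct leaf pair is exactly two hops apart (one spine crossing)
assert all(hop_count(a, b) == 2 for a, b in permutations(leaves, 2))
print("all leaf-to-leaf paths: 2 hops")
```

That uniform two-hop distance is what gives these flatter fabrics their predictable latency, and it also simplifies the controller's job: path computation reduces to picking which spine to cross.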
One of Marist’s early contributions has been to create an open source FloodLight administrative control panel (FACP) that can be used for network administration. The FACP eliminates the need to write Python scripts to control the switches, thereby reducing management complexity. FACP provides an abstraction of the network, and a configuration application can be built against this abstraction. At the conference, Marist held a demo showing how this controller can provision quality of service and routing of Layer 2 & 3 VLANs in the network. Manipulation of firewall ACLs is also possible, and future extensions may include MPLS and other WAN-related protocols. Ongoing work in this area is focused on creating a static flow pusher, which will provide a programmable interface for writing scripts that manage flow tables across the network using the FloodLight REST API.
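As a rough sketch of what scripting against the controller's REST interface looks like, here's a small Python example that builds a static flow entry and (optionally) POSTs it to Floodlight. Caveats: the field names follow the 2013-era static flow entry pusher format, the endpoint path and controller address are assumptions about the lab's controller version, and the switch DPID is made up.

```python
# Sketch of pushing a static flow through the Floodlight REST API.
# Field names follow the 2013-era static flow entry pusher format; the
# endpoint path, controller address, and DPID below are assumptions.
import json
import urllib.request

CONTROLLER = "http://127.0.0.1:8080"  # assumed Floodlight controller address

def make_flow(name, dpid, in_port, out_port, priority=100):
    """Build a static flow entry that forwards in_port -> out_port."""
    return {
        "switch": dpid,                 # datapath ID of the target switch
        "name": name,                   # unique name, so the flow can be deleted later
        "priority": str(priority),
        "ingress-port": str(in_port),
        "active": "true",
        "actions": f"output={out_port}",
    }

def push_flow(flow):
    """POST the entry to the controller's static flow pusher endpoint."""
    req = urllib.request.Request(
        CONTROLLER + "/wm/staticflowentrypusher/json",
        data=json.dumps(flow).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()

flow = make_flow("demo-flow-1", "00:00:00:00:00:00:00:01", 1, 2)
print(json.dumps(flow, indent=2))
# push_flow(flow)  # uncomment only with a live controller to talk to
```

Even this simple shape shows why a control panel like the FACP helps: it wraps exactly this kind of per-switch, per-flow bookkeeping behind a network-wide abstraction.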
Further investigation will include such topics as demonstrating multi-vendor interoperability under a common FloodLight controller, and exploring the limits of scalability and security associated with OpenFlow networking. Keep up with their latest work and see their presentation from the NSF conference.
Want to suggest another TLA (three letter acronym) for my list? Comment on this blog entry below, or drop me a line on my Twitter feed.
Casimer DeCusatis 2700058MPY email@example.com Tags:  #networking #ibmsysnet #ibmodin #sdn #odin #ibm 2,012 Visits
Software Defined Networking and 100G are hot topics at symposium
Last week, I presented some of my work at the annual North America Technology Symposium sponsored by Adva Optical Networks, at the Millennium Broadway Hotel in New York City. Regular readers of my blog will recall that Adva is among the many companies who have publicly endorsed the Open Datacenter Interoperable Network (ODIN), which is IBM’s vision for next generation data networking. While the Adva symposium was admittedly focused mostly on their solutions, there were many presentations of general interest. The presentations are available here, but I’d like to review the highlights in this blog.
The symposium was a full day event, opening at 9 AM with a welcome from Adva’s CEO, Brian Protiva, followed by a series of invited speakers. After lunch, two breakout sessions were offered for either enterprise or carrier networking (I chose the enterprise track, which focused on 100G metro and 16G Fibre Channel SAN). The day wound up with a Q&A session and dinner.
First, one of the Vice-Presidents from Verizon discussed adoption of software defined networking to create what he called “service aware networks”. He cited several examples of how carriers and ISPs can leverage SDN to grow top line revenue and provide value added services. We heard many interesting factoids during the day, such as how Internet content consumption, driven by video, will grow 10X by 2017. It was clear that commoditized hardware and centralized software control were creating an interesting value proposition in this market.
Later, my presentation dealt with using SDN to help manage the combination of exponential capacity growth and declining margins faced by many ISPs and cloud providers. In the past, networking was all about how quickly you could deploy, scale, and manage infrastructure to create value. Networking equipment was relatively feature-poor; a combination of low bandwidth, custom ASICs with low functionality, and immature protocols in the data and control planes made it difficult to realize a higher value proposition. That’s all changing; modern Ethernet bandwidth is approaching that of a computer backplane, flatter two-tier Clos or mesh topologies offer better performance, and protocols like TRILL and OpenFlow coax more value from merchant silicon. In this age of network affluence, users demand a higher quality of service, including bandwidth and latency guarantees, turnkey provisioning, and application-aware network optimization. This leads to a new value proposition from SDN networks; virtualization of the data center network and beyond is the next big frontier. This value doesn’t come without challenges, though. Current OpenFlow is driving us back to a centralized management framework, and scaling is an issue. The benefits of a flow-switched topology are clear, however, and include standardized, fine-grain flow control, rapid application deployment, and end-to-end performance guarantees – real value that clients are willing to pay for.
From Michael Haley, IBM Distinguished Engineer, we learned more about how cloud computing is changing the world. Market dynamics are volatile; look at the world’s 10 largest companies from the year 2000, and you’ll only recognize 2 of the same names on the list today. But the market potential is also growing; there will be over a trillion devices connected to the Internet next year. And smart enterprise CEOs are leveraging hybrid cloud in 60% of their installations (up from 33% just two years ago). Cloud adoption is driven by workloads, and some analysts believe that a major segment will be compute as a service, representing over 20% of a projected $55B market in 2014. Business analytics, social business, telecommunications, banking, mobile video, healthcare, and utilities dominate in different markets worldwide. In China, for example, strategic investments in “cloud cities” have been launched to support a major part of their current $4 Trillion, 5 year economic plan.
Supporting an earlier assertion about bandwidth growth, the enterprise breakout sessions included a live demonstration of 100G metro Ethernet and a discussion of the power, space, and cost for such solutions. Using a network test set and traffic source, Adva showed how they can transport 100G over metro distances with appliances about 1U high that fit into a standard 19 inch equipment rack. Various cable configurations and latency measurements were also made during this presentation.
I’m sure by now you’ve got the general idea…Bandwidth is exploding in the metro area at lower cost than ever before, driven by new applications such as cloud computing and the promise of software-defined networking. Put on your shades, the future for metro optics looks bright indeed.
Are you looking forward to high bandwidth optical links in the metro cloud, or are you just blinded by the light? Drop me a line or send me a tweet @Dr_Casimer if you’d like to discuss more.
Casimer DeCusatis 2700058MPY firstname.lastname@example.org Tags:  #odin #sysnet #networking #ibmodin #ibm 1,121 Visits
Top Ten Must-Reads on IBM Networking Strategy
There’s been so much going on in the world of data networking lately that I hardly know where to begin. It feels like I’ve been living on Internet Time this year (maybe you have, too); it’s hard to believe it’s already most of the way through first quarter. So, while I usually don’t take this approach, I thought that the fastest way to get everyone up to date on all the latest networking news would be to let you pick your favorites from the list of my recent presentations, podcasts, and webinars.
For starters, I recently got back from the Open Network Exchange meeting in New York City, sponsored by Network World magazine in mid-February. I gave a talk on how software-defined networking is being used as part of the ODIN network architecture, including some thoughts on finding a standard definition for SDN (something even Bob Metcalfe hasn’t been able to do). I also spoke about how SDN disrupts existing markets, reviewed IBM’s early client adopters & the benefits they have realized, and offered a few thoughts on what the future holds. You can see my presentation, plus others from the conference, at this site:
Of course, there’s still a lot of debate among different parts of the industry regarding what SDN really means. In particular, the datacom and telecom worlds have surprisingly different perspectives on this issue. I recently participated in a roundtable discussion on this topic, along with representatives from Cisco, Juniper, Huawei, Alcatel-Lucent, MRV, AT&T, Verizon, Orange, Ericsson, Rad Data Communications, and the ONF; you can listen to the discussion here. In the future we plan more of these roundtable discussions, leading up to the 2013 MPLS/Ethernet World Congress, so keep watching as the debate continues.
I still feel that network virtualization is the next big thing in our industry, and software-defined networking has become one of the hottest topics since the creation of Ethernet 25 years ago (if your memory doesn’t go back that far, read the first chapter in your CCNA qualification guidebook to see how the world used to be made up of private networks from IBM, DEC, Xerox, and others). While SDN is almost certainly over-hyped right now, I believe it’s nearing the peak of the Gartner Group hype cycle, as evidenced by some early adopters who have found high value use cases for this technology. To hear more, listen to my podcast with Lippis Group on SDN enablement of next generation data centers, recorded December 2012
While you’re on the Lippis Group website, if you still haven’t read my blog or the IBM System Networking website articles about the Open Datacenter Interoperable Network (ODIN), download my podcast on this topic to get up to date on how ODIN is being applied at large data centers worldwide, and how it will continue to reflect changes in the networking community throughout this coming year.
If you’re a regular reader of my blog and Twitter feed, then you know that I’m passionate about open standards. In fact, if somebody tries to tell you they have SDN working in their data center today, but it only runs on their equipment, don’t believe them…SDN only works when it’s part of a larger, standards-based data center strategy. If you’d like to read about that larger strategy, and how it relates to big data, analytics, and other workloads, there’s a nice, short introduction in the new IBM RedPaper Point of View (PoV) article series. Sponsored in part by the IBM Academy of Technology, these new Redpapers bring you all the key facts for a quick tutorial on a subject, and refer you to the much longer Redbooks for a step-by-step cookbook on how to make them work for you. Redpapers are available on a wide range of topics; for data networking, start with my PoV publication (IBM Redbooks #redp-4933-00).
Or, if you’d like a slightly longer discussion on this topic, look no further than the Winter 2012 issue of Enterprise Tech Journal for my article “Getting the most from your data center network”.
Interested in storage area networking, or wondering how the SAN is going to change in the future? I've been working on that question with some of our industry partners, including ODIN-endorser and leading SAN authority Brocade, who have also recently been qualified by IBM for extended distance backup solutions using SAN Volume Controller (SVC). To see how SVC handles long distance Fibre Channel applications and integrates with VMWare management solutions, check out our recent presentation from IBM SHARE (session 12735) on avoiding the fog and smog that can come with cloud networks.
Late last year, the governor of New York State announced the creation of a new, $3M Center for Cloud Computing and Analytics, based at Marist College. IBM has funded an SDN research lab which is affiliated with this group, and which will also be taking advantage of Marist’s membership in the Internet 2 consortium (regular blog readers will also recall that Marist is the first academic institution to endorse ODIN). While this program is still in its early days, Marist has successfully built an SDN testbed using the Floodlight controller, made contributions to the Floodlight distro, released an open source SDN dashboard tool called Avior, and begun to prototype SDN in a mainframe enterprise environment. The college recently presented a 90 minute, sold-out presentation on their SDN work at the TIP 2013 conference in Hawaii; if you didn’t get a trip to this tropical paradise to hear them, you can still find their presentation and summaries of their recent work.
Did I hear someone ask how Google is using optical technologies to add value in their data center networks? (Yes, I have the technology to hear you through my blog page, but if I told you how it works I’d have to kill you.) Anyway, some of my colleagues at Google recently weighed in on this topic for Laser Focus World, and I was subsequently invited to present a webinar based on their work (with a few of my own recent accomplishments thrown in). You might not agree with everything they have to say (after all, very few of us are running a data center with Google’s requirements), but it’s always interesting to hear one of the biggest network operators on the planet talking about optical technology. You can listen to an on-demand playback of my webinar, which cites the original Google article.
You can find out more about speakers on these and other related topics by visiting the OFC cloud/datacom landing page
Also, I’ll be doing a live daily blog from OFC starting March 17, so be sure to check this site for regular updates during the conference. Or you can stop by & visit me in person, either during my presentation for the OIDA workshop on metrics for aggregated networks or my tutorial on optical interconnects for datacom on Tuesday, March 19. I’ll also be stopping by the Elsevier booth on the trade show floor to check on plans for the fourth edition of my book, the Handbook of Fiber Optic Data Communication, coming out later this year (but that’s another blog….)
As I’ve said before, this is a very interesting time to be an optical network engineer. I hope that some of these recent articles appeal to you, and if there’s another topic you’d like to see me cover, drop me a line or send me a tweet (@Dr_Casimer). And if anybody would like to get together at OFC/NFOEC in Anaheim, be sure to let me know!