ODIN endorsed by Alcatel-Lucent
I’m pleased to report that Alcatel-Lucent has publicly endorsed the Open Datacenter Interoperable Network (ODIN) approach to designing data center networks. As regular readers of my blog know, IBM released a set of technical briefs describing ODIN during the Interop conference in Las Vegas earlier this year. This approach to using industry standards as the preferred means of designing data center networks has been endorsed in this post from Sam Bucci, a Vice-President at Alcatel-Lucent. Many thanks for this support of open networking standards; I’m sure we’ll have more to say about how to build these solutions with Alcatel-Lucent in the near future.
My Crystal Ball: Top 3 Predictions for 2013
As the year draws to a close, I predict there will be no shortage of articles looking back at 2012 and ahead to 2013. And my humble blog is no exception.
Looking back on 2012, I’ve kept you up to date on the latest networking developments from IBM and the industry. I’ve also recently been blogging about the upcoming OFC/NFOEC conference in March 2013 (where I’m giving a tutorial presentation) on topics including software-defined networking (SDN), energy efficient networking, cloud computing, wavelength multiplexing, 100G networks, and more. And don't miss my latest webinar with Forrester Group on SDN and how it can make a big impact on your plans for next year.
Assuming the ancient Mayans were wrong and the world doesn’t end in 2012, here are my top three predictions to keep in mind for the coming year, counting down in reverse order:
(3) Open standards will be the best way forward for data networking. This past year, the response to IBM’s Open Datacenter Interoperable Network (ODIN) was huge – over 20,000 views in the first two weeks alone. Look for more industry standards and open source software to make an impact in 2013, including ODIN on steroids and lots more from academic partners like the Marist College SDN/OpenFlow lab.
(2) Data rates will continue to increase. OK, that was an easy one, but it’s happening faster than most people thought…10G links are ubiquitous, 40G links are going mainstream in 2013, and 100G for the data center is right around the corner in 2014.
(1) Network virtualization will be the next big thing. Software-defined networking (SDN) is being deployed by some large users (just ask Google), the OpenFlow 1.3 standard has been released, and overlay networks like DOVE are moving forward in the standards bodies.
Looking ahead into the new year, I’ll be bringing you more tweets and blogs on the latest data center networking news, as well as new podcasts and webinars. Maybe I’ll see you at OFC 2013 for my tutorial, and watch for the next edition of my book in 2013. Or, you can catch me at any of the remaining Hudson Valley FIRST Lego League tournaments in January and February; we’ll be sending the Hudson Valley Champion to World Fest this year, and that team will be selected from a field of over 80 teams through six qualifying events held across New York’s Hudson Valley, including Troy, Ballston Spa, Poughkeepsie, LaGrange, Elmsford, and Sleepy Hollow tournaments. And, as always, you can keep the conversation going by commenting on my blog or Twitter. I’d like to wish all my readers a safe, happy, healthy holiday season and a prosperous new year.
Top Ten Must-Reads on IBM Networking Strategy
There’s been so much going on in the world of data networking lately that I hardly know where to begin. It feels like I’ve been living on Internet Time this year (maybe you have, too); it’s hard to believe it’s already most of the way through the first quarter. So, while I usually don’t take this approach, I thought that the fastest way to get everyone up to date on all the latest networking news would be to let you pick your favorites from the list of my recent presentations, podcasts, and webinars.
For starters, I recently got back from the Open Network Exchange meeting in New York City, sponsored by Network World magazine in mid-February. I gave a talk on how software-defined networking is being used as part of the ODIN network architecture, including some thoughts on finding a standard definition for SDN (something even Bob Metcalfe hasn’t been able to do).
I also spoke about how SDN disrupts existing markets, reviewed IBM’s early client adopters & the benefits they have realized, and offered a few thoughts on what the future holds. You can see my presentation, plus others from the conference, at this site:
Of course, there’s still a lot of debate among different parts of the industry regarding what SDN really means. In particular, the datacom and telecom worlds have surprisingly different perspectives on this issue. I recently participated in a roundtable discussion on this topic, along with representatives from Cisco, Juniper, Huawei, Alcatel-Lucent, MRV, AT&T, Verizon, Orange, Ericsson, Rad Data Communications, and the ONF; you can listen to the discussion here. In the future we plan more of these roundtable discussions, leading up to the 2013 MPLS/Ethernet World Congress, so keep watching as the debate continues.
I still feel that network virtualization is the next big thing in our industry, and software-defined networking has become one of the hottest topics since the creation of Ethernet 25 years ago (if your memory doesn’t go back that far, read the first chapter in your CCNA qualification guidebook to see how the world used to be made up of private networks from IBM, DEC, Xerox, and others). While SDN is almost certainly over-hyped right now, I believe it’s nearing the peak of the Gartner Group hype cycle, as evidenced by some early adopters who have found high-value use cases for this technology. To hear more, listen to my podcast with Lippis Group on SDN enablement of next generation data centers, recorded in December 2012.
While you’re on the Lippis Group website, if you still haven’t read my blog or the IBM System Networking website articles about the Open Datacenter Interoperable Network (ODIN), download my podcast on this topic to get up to date on how ODIN is being applied at large data centers worldwide, and how it will continue to reflect changes in the networking community throughout this coming year.
If you’re a regular reader of my blog and Twitter feed, then you know that I’m passionate about open standards. In fact, if somebody tries to tell you they have SDN working in their data center today, but it only runs on their equipment, don’t believe them…SDN only works when it’s part of a larger, standards-based data center strategy. If you’d like to read about that larger strategy, and how it relates to big data, analytics, and other workloads, there’s a nice, short introduction in the new IBM Redpaper Point of View (PoV) article series. Sponsored in part by the IBM Academy of Technology, these new Redpapers bring you all the key facts for a quick tutorial on a subject, and refer you to the much longer Redbooks for a step-by-step cookbook on how to make them work for you. Redpapers are available on a wide range of topics; for data networking, start with my PoV on data networking (IBM Redbooks publication REDP-4933-00).
Interested in storage area networking, or wondering how the SAN is going to change in the future? I've been working on that question with some of our industry partners, including ODIN-endorser and leading SAN authority Brocade, who have also recently been qualified by IBM for extended distance backup solutions using SAN Volume Controller (SVC). To see how SVC handles long distance Fibre Channel applications and integrates with VMWare management solutions, check out our recent presentation from IBM SHARE (session 12735) on avoiding the fog and smog that can come with cloud networks.
Late last year, the governor of New York State announced the creation of a new, $3M Center for Cloud Computing and Analytics, based at Marist College. IBM has funded an SDN research lab which is affiliated with this group, and which will also be taking advantage of Marist’s membership in the Internet2 consortium (regular blog readers will also recall that Marist is the first academic institution to endorse ODIN). While this program is still in its early days, Marist has successfully built an SDN testbed using the Floodlight controller, made contributions to the Floodlight distro, released an open source SDN dashboard tool called Avior, and begun to prototype SDN in a mainframe enterprise environment. The college recently presented a sold-out, 90-minute session on their SDN work at the TIP 2013 conference in Hawaii; if you didn’t get a trip to this tropical paradise to hear them, you can still find their presentation and summaries of their recent work online.
Did I hear someone ask how Google is using optical technologies to add value in their data center networks? (Yes, I have the technology to hear you through my blog page, but if I told you how it works I’d have to kill you.) Anyway, some of my colleagues at Google recently weighed in on this topic for Laser Focus World, and I was subsequently invited to present a webinar based on their work (with a few of my own recent accomplishments thrown in). You might not agree with everything they have to say (after all, very few of us are running a data center with Google’s requirements), but it’s always interesting to hear one of the biggest network operators on the planet talking about optical technology. You can listen to an on-demand playback of my webinar, which cites the original Google article.
Finally, you may have noticed that one of the largest conferences in the field, the Optical Fiber Conference & National Fiber Optic Engineers Conference (OFC/NFOEC), has invited me to blog about some of the hot topics in the industry leading up to the March 2013 OFC meeting. I’ve been doing this for a few months now, on a wide range of topics including low power optical interconnects, optics for cloud computing and SDN, WAN interconnects, and next generation data centers. You can find out more about speakers on these and other related topics by visiting the OFC cloud/datacom landing page.
Also, I’ll be doing a live daily blog from OFC starting March 17, so be sure to check this site for regular updates during the conference. Or you can stop by & visit me in person, either during my presentation for the OIDA workshop on metrics for aggregated networks or my tutorial on optical interconnects for datacom on Tuesday, March 19. I’ll also be stopping by the Elsevier booth on the trade show floor to check on plans for the fourth edition of my book, the Handbook of Fiber Optic Data Communication, coming out later this year (but that’s another blog…)
As I’ve said before, this is a very interesting time to be an optical network engineer. I hope that some of these recent articles appeal to you, and if there’s another topic you’d like to see me cover, drop me a line or send me a tweet (@Dr_Casimer). And if anybody would like to get together at OFC/NFOEC in Anaheim, be sure to let me know!
ODIN endorsed by BTI
I’m pleased to report that BTI has become the latest company to publicly endorse the Open Datacenter Interoperable Network (ODIN) approach to designing data center networks. As regular readers of my blog know, IBM has released a set of technical briefs describing ODIN, which provides an approach to using open industry standards to create next generation data center networks. I’ve written, podcasted, and been interviewed many times about ODIN, all of which is linked from my blog. This approach to using industry standards as the preferred means of designing data center networks has been endorsed in this post from Chandra Pandey, Vice-President of Platform Solutions at BTI. Many thanks for this support of open networking standards; I’m sure we’ll have more to say about how to create these solutions with IBM and BTI technology in the near future.
Daylight Finally Breaks Through the Clouds
“To my surprise and my delight…the clouds burst to show daylight” - Coldplay
This past month, a lot of people have been asking me to comment on the rumors swirling around a possible IBM open source initiative for software-defined networking (SDN) called Daylight. And I mean a LOT of people, from the audience at the OFC/NFOEC conference, to the Wall Street banks who attended my talk at the Open Network Exchange in Manhattan, to my fellow networking engineers who participated in the online roundtable from the MPLS/Ethernet World Congress. Since I usually enjoy a good technical discussion, it’s been more than a bit frustrating that I couldn’t respond to these rumors directly before now. Waiting for the official announcement of this initiative was made even harder when I read some of the preliminary online speculation about Daylight that was either misinformed, misguided, or just plain wrong.
In any case, it was a tremendous relief for me when The Linux Foundation (the industry’s leading nonprofit organization dedicated to open source development) announced OpenDaylight this morning. So for everyone who’s been waiting along with me, let’s take this opportunity to clear the air about what OpenDaylight actually means for the data networking industry.
As I mentioned in my recent tutorial at OFC/NFOEC, major industry trends such as warehouse scale data centers, big data analytics, and cloud computing in the enterprise are driving companies to revisit their data center network designs. SDN has the potential to lower capital and operating expenses, increase efficiencies, and provide faster time to value in this rapidly changing environment. But in order to fully realize the potential value of SDN, as quickly as possible, we need to go beyond the product line of any one networking company. We need to create a development community that encourages rapid innovation from a broad range of stakeholders serving a common goal. In short, we need to do for networking what Linux did for server operating systems.
OpenDaylight is an open source framework intended to accelerate adoption of Software Defined Networking and to create an open, transparent approach to SDN development. Just as the Linux community created a viable open source operating system, which matured until it was deployed on enterprise-class systems and mainframes, OpenDaylight will create programmable SDN abstractions for many different types of data networks. Much of the initial code will be contributed and supported by industry leading companies who have signed up as either Platinum or Gold members of the OpenDaylight project. I feel this is one of the strongest features of the project; OpenDaylight is not owned by any one company, although many industry leaders have committed both developers and funding to the effort. Besides IBM, founding members include Big Switch Networks, Brocade, Cisco, Citrix, Ericsson, Juniper, Microsoft, NEC, Red Hat, and VMWare; other contributors include Arista, Fujitsu, Alcatel-Lucent, Intel, Dell, Hewlett-Packard, Nuage, and Plumgrid. I’ve been working with many other IBMers for the past few months, talking with these companies and crafting a common perspective for the SDN open source community. You’ll note that a number of these companies had previously publicly endorsed IBM’s point of view regarding open networking standards (the Open Datacenter Interoperable Network, or ODIN). One of the five initial ODIN volumes deals with SDN and its implications for the data center; by definition, supporting OpenDaylight means supporting open networking standards, so it’s nice to see other companies joining this commitment to interoperable SDN networks.
This project is good news for anyone who’s been trying to implement the recent Gartner Group study, which effectively said that corporations who didn’t pursue a multi-vendor networking strategy were paying 15-25% more than necessary for their network, and thus failing to meet their fiduciary responsibilities. But SDN is about much more than just reducing your network operating expenses; it’s also a driving force for new applications and potentially new revenue streams. As I’ve said in my blog on many occasions, open standards and open source software are an excellent way to foster innovation. By supporting OpenFlow and other standards, OpenDaylight allows a global development community to innovate at the speed of software, just as we’ve seen for smart phones or tablet computers.
The first code from the OpenDaylight Project is expected to be available in 3Q this year, and will include an open controller, virtual overlay network, protocol plug-ins, and switch device enhancements. The code is independent of the network operating system, and is governed by best practices such as the Eclipse Public License (EPL) commonly used for Java. Just as in any open source community, companies are free to participate based on the merit of their contributions. For example, IBM plans to contribute an open source version of its Distributed Overlay Virtual Ethernet (DOVE) technology, which has been working its way through the IETF standards bodies for some time now. DOVE software runs on top of the existing network hardware infrastructure and virtualizes layer 2 and 3 network properties. This makes it possible to set up, manage, and scale virtual networks much faster than ever before. Some possible applications of DOVE include merging multiple data networks together (for example, when one company acquires another) or allowing highly virtualized servers to connect with merchant silicon switches (by abstracting the IP and MAC address tables).
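To make the overlay idea more concrete, here is a minimal sketch (my own illustration, not IBM’s DOVE code; the class and field names are hypothetical) of the mapping an overlay controller maintains between virtual machine addresses and the physical tunnel endpoints that host them:

```python
# Hypothetical sketch of an overlay address-mapping table, in the spirit of
# DOVE-style network virtualization. Names and structure are illustrative only.

class OverlayMap:
    """Maps (virtual network ID, VM IP) to a physical tunnel endpoint."""

    def __init__(self):
        self._table = {}  # (vnid, vm_ip) -> hypervisor tunnel endpoint IP

    def register_vm(self, vnid, vm_ip, endpoint_ip):
        # Called when a VM boots on, or migrates to, a new host.
        self._table[(vnid, vm_ip)] = endpoint_ip

    def lookup(self, vnid, vm_ip):
        # The sending hypervisor asks: where do I tunnel this packet?
        return self._table.get((vnid, vm_ip))

overlay = OverlayMap()
# Two tenants can reuse the same IP space; the virtual network ID keeps them apart.
overlay.register_vm(vnid=10, vm_ip="192.168.1.5", endpoint_ip="10.0.0.1")
overlay.register_vm(vnid=20, vm_ip="192.168.1.5", endpoint_ip="10.0.0.2")
# A live migration is just an update to the mapping, not a switch reconfiguration.
overlay.register_vm(vnid=10, vm_ip="192.168.1.5", endpoint_ip="10.0.0.3")
```

The point of the abstraction is that merging two networks, or moving a VM, only touches this software table rather than the MAC and IP tables of the underlying hardware.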
I’ve said this before, but it bears repeating…it's a very cool time to be a networking engineer. I’m excited by the potential of OpenDaylight for cloud computing and other applications, and I’m looking forward to working with the global development community on bringing all the benefits of Linux and server virtualization to the data center network.
Still not sure how you feel about open source networking? Now that I can talk freely about OpenDaylight, drop me a line on Twitter @Dr_Casimer
From the Abstract to the Concrete: 5 Reasons why SDN makes a difference
I’m very pleased that the IBM Smarter Computing website has agreed to host my latest blog on why SDN makes a difference for your business. Check it out and let me know what you think, or drop me a line on Twitter (@Dr_Casimer).
The Cloudy with a Chance of SDN World Tour 2013
“So what…I’m still a rock star…and guess what, I’m having more fun…” words to live by from everyone’s favorite role models, Pink
I had an epiphany this past month, when my manager congratulated me on becoming a rock star. At first I wasn’t quite sure what to make of this. Sure, my classic rock playlists for the Hudson Valley FIRST Lego League tournaments have earned plenty of compliments, but that’s just because I was lucky enough to be raised in the 80s when the best rock music ever was being written (who knew, right?).
As it turns out, he wasn’t referring to my excellent taste in music, just the huge amount of traveling I’ve been doing lately. There’s been tremendous interest in software-defined networking (SDN) this year, and I’ve been going on tour to educate & inform people at a series of events sponsored by IBM and others. You might have seen me in New York City earlier this year, when I presented a two-day session for industry & academic users of SDN at IBM’s office on Madison Avenue. I was also in Chicago this past May for a briefing sponsored by Network World. This summer, you can catch my act in a city near you; sign up at the links below to attend for free, or to check out events you may have missed:
June 10 – Poughkeepsie, NY (at the annual NSF Enterprise Computing Conference, held at the recently established New York State Center for Cloud Computing & Analytics, Marist College. All the conference presentations are now available online).
July 9 – Toronto, Ontario, Canada (at the IBM Markham Innovation Center), with my special opening act from the Marist College SDN Lab, to get the crowd warmed up
August 14-15 – Boston, Mass. (an IBM briefing in Waltham, where my colleagues from CUNY will be opening for me with a talk on teaching SDN, then a stop at the SHARE conference that week)
August 28 – Philadelphia, PA (an IBM event at the Valley Forge Casino, with more participation from Marist & CUNY). They’ll be camping out in line to get tickets for this one, so be sure to book early!
August 29 – Washington, D.C. (another IBM event with the Marist and CUNY teams)
September 12 – New York City (for the Adva North America Partner Symposium)
And just like a real rock star, I get to call this an international tour because we have one stop in Canada! Although to be fair, you might have caught a colleague of mine presenting some new research on the European leg of our tour, at the IEEE ICC conference in Budapest, Hungary this past June, where we presented some of our recent work on orchestrating networks within and between multiple data centers. All these dates make it hard to keep track, maybe I should get some T-shirts printed up…
If you can’t catch me on tour this summer, don’t worry; rock stars also do videos, and I’m not going to be an exception. OK, so it’s more of an interview where I describe the IBM ODIN network reference architecture and OpenDaylight, but I still hope you’ll find it interesting. Check out my video here; I’m not promising that it will sync up perfectly with Dark Side of the Moon if you play it backwards at half speed, but you never know.
Or, if you’re really ready to rock out hard core, dare to check out my latest article on innovation, with my co-authors from the Harvard School of Business, in the March issue of the Journal of Innovation Management, Policy & Practice. It’s totally off the chain.
Of course, I don’t have any rock star groupies yet…which is where you, my loyal readers, come in. After you check out my talks, papers, or videos, drop me a line on Twitter (@Dr_Casimer) & let me know what you think about SDN, or what you’d like to see covered in my next blog. And don’t stop believin’ in the power of SDN as you continue your journey towards open source, virtualized networks.
SDN Takes Manhattan: IBM’s new live migration demo
From the Muppets to the Weeping Angels, many have tried to take Manhattan by storm in TV and film. This past month, IBM had an opportunity when we unveiled a new demo showcasing the power of software defined networking (SDN) and Network Function Virtualization (NFV) for both data centers and telecommunication providers.
This demo is the latest work from IBM’s collaboration with the New York State Center for Cloud Computing & Analytics at Marist College. Established earlier this year with a $3M grant from the governor of New York, this center supports a wide range of projects which benefit businesses in New York State. One of these projects, sponsored in part by IBM’s university relations group and by other business partners including Adva Optical Networking, is the SDN / NFV test bed, located in the brand new Hancock Building on the Marist campus. This data center houses many different types of IBM servers, storage, and system networking devices (including IBM PureSystems and IBM G8264 switches), as well as optical wide area networking equipment from Adva (the FSP 3000 platform). This environment was used to develop a new, first of its kind deployment of SDN across multiple data centers separated by 125 km of optical fiber, in which the optical WAN equipment is also orchestrated by the same SDN controller. While current SDN standards only support Layer 2-3, there is an effort under way to standardize OpenFlow down to the physical layer, which is expected to make significant progress in 2014.
The Marist College faculty and students did an outstanding job writing several new applications for this environment, including an open source management graphical user interface called Avior, which controls OpenFlow devices from a mobile device such as a tablet or smart phone. Avior allows us to monitor network statistics, configure traffic flows, and administer firewall or load balancer properties, without requiring the network administrator to write complex Python scripts. Avior incorporates a static flow pusher which can deploy pre-configured network profiles or schedule network configuration events for touch-free provisioning. A second application, developed specifically for the Adva optical network (which the students have named Advalanche), is called by Avior to orchestrate the WDM equipment. We use the open source application Ganglia to monitor events such as server utilization or available memory; when a preset threshold is exceeded, Ganglia executes an action such as provisioning a network profile, migrating a VM, or cloning a VM. The SDN controller runs OpenFlow 1.0, and can be either the IBM PNC controller or an open source controller such as Floodlight (we plan to introduce OpenDaylight into the test bed when it becomes publicly available this coming December).
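For the curious, here is a rough sketch of the kind of static flow entry a tool like Avior might push to a Floodlight controller. The field names follow Floodlight’s static flow entry pusher REST API of that era, but the controller address, switch DPID, and port numbers are hypothetical placeholders, not values from the Marist test bed:

```python
import json

CONTROLLER = "http://127.0.0.1:8080"  # hypothetical Floodlight REST endpoint

def make_static_flow(name, switch_dpid, in_port, out_port, priority=100):
    """Build a static flow entry that forwards traffic between two ports
    on an OpenFlow 1.0 switch."""
    return {
        "name": name,                    # unique name for later lookup/deletion
        "switch": switch_dpid,           # datapath ID of the target switch
        "priority": str(priority),
        "ingress-port": str(in_port),    # OpenFlow 1.0 match field
        "active": "true",
        "actions": "output=%d" % out_port,
    }

flow = make_static_flow("demo-flow-1", "00:00:00:00:00:00:00:01",
                        in_port=1, out_port=2)
payload = json.dumps(flow)

# To actually push the flow, POST the payload to the controller, e.g.:
#
#   import urllib.request
#   req = urllib.request.Request(
#       CONTROLLER + "/wm/staticflowentrypusher/json",
#       data=payload.encode(),
#       headers={"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
```

Wrapping calls like this behind a graphical interface is exactly what lets an administrator provision flows without writing scripts by hand.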
In this demo, we showed live VM migration (moving a VM while it continues to run uninterrupted) being triggered automatically when CPU utilization on the host server exceeded 75%. In this example, the VM was running a video streaming application, but in principle we could use any NFV application, such as a virtual firewall, router, or telecom function. Possible use cases for this function include multi-site load balancing, or cloud bursting between a private data center and a public cloud; this work may also have applications to the so-called “noisy neighbor” problem in public cloud deployments. Since network costs can be a significant fraction of the total cost for cloud computing services, this approach can also be used to provision dynamic workloads and improve utilization of the data networks.
When over-utilization of the server occurs, Ganglia triggers a VM migration. The SDN controller automatically provisions all the switches in the source and target data center, as well as wavelengths on the optical network (subject to available physical resources and workload priority levels). End-to-end network provisioning can be accomplished in about a minute, as compared with current approaches which can take days or weeks since they require manual intervention to configure both the data center network and the WAN. VMWare is then used to live migrate the VM from a server in the source data center to another server in the target data center (in the future we plan to use other hypervisors including KVM and PowerVM). As usual, migration time is a function of the size of the VM and the available network bandwidth; in our example, a 3 GByte VM live migrates in a few minutes (we have partnerships with other colleges in New York State which are developing models for VM migration time). Once migration is complete, the SDN controller reverts all the network devices to their original states, including the WAN.
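The trigger logic itself can be quite simple. Here is a sketch of the threshold check and action sequence described above; the 75% threshold matches our demo, but the function names and structure are my own illustration, not the test bed’s actual code:

```python
CPU_THRESHOLD = 75.0  # percent, per the demo described above

def needs_migration(cpu_samples, threshold=CPU_THRESHOLD):
    """Decide whether sustained CPU load on a host warrants live migration.

    Averaging over several samples avoids kicking off an expensive
    migration (and WAN provisioning) on a momentary spike.
    """
    if not cpu_samples:
        return False
    return sum(cpu_samples) / len(cpu_samples) > threshold

def migration_plan(vm_name, source_host, target_host):
    # In the real test bed, these steps would call the SDN controller to
    # provision switches and WAN wavelengths, invoke the hypervisor's
    # live-migration API, then revert the network; here we just return
    # the intended sequence of actions.
    return [
        ("provision_network", source_host, target_host),
        ("live_migrate", vm_name, target_host),
        ("revert_network", source_host, target_host),
    ]
```

A monitoring hook (in our case, a Ganglia action) calls `needs_migration` on recent samples and, if it returns true, walks through the plan: provision end to end, migrate, then release the network resources.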
This demo was presented for the first time at the Adva North America Technology Symposium, held just off Broadway in New York City on September 12. The response has been overwhelming; since we’ve taken Manhattan, we’ve gone on to showcase this work worldwide, ranging from major service providers who’ve come to visit Marist College, to remote demos for interested people in Japan and Bratislava. If you happen to be near Frankfurt, Germany the week of October 15, stop by the SDN & OpenFlow World Congress to see Prof. Rob Cannistra from Marist presenting the demo (and don’t forget to visit the IBM and Adva presentations while you’re there).
Projects like this one demonstrate the disruptive effect SDN and NFV are having on the global market, and this is only the beginning. Now that we’ve started spreading the news about SDN in New York, I’ll keep you posted on our latest SDN research and other developments. You can also read more about SDN and cloud computing in the recently released, 4th edition of my book, the Handbook of Fiber Optic Data Communication from Academic Press/Elsevier. If you know anyone who’d like to see this demo, or any companies / universities who would like to collaborate with the Marist lab, please drop me a line (email@example.com) or a message on Twitter (@Dr_Casimer).
From Wall Street to Watson – The 2013 Mark Luchinsky Memorial Lecture
This past week, I was honored to be invited to present the 2013 Mark Luchinsky Memorial Lecture at Penn State, and to meet with some of the faculty and students, including members of the Schreyer Honors College. This talk was presented as part of the lead-up to the 2014 Shaping the Future Summit. I enjoyed the opportunity to talk with students about technology issues, and since we didn’t have time to take everyone’s questions during the event, I’d like to use this blog to respond to several questions that came in through Twitter during & after the lecture.
Question: “When it comes to ethics in innovation, how do you balance when something created for good can be used for evil?”
I see you’ve been reading Google’s list of 10 things they know are true (not doing evil comes in at number 6). While I personally like their technologist version of the Hippocratic Oath for engineers (first, do no harm), there are often large gray areas in practice (for example, should Google refrain from deploying their technology in nations which censor the Internet as a protest against that activity, or should they engage in these nations and try to improve things from within the system?). These are not easy questions. For some techniques on dealing with gray area thinking, I’d recommend the Presidential Leadership Academy at Penn State; I had the opportunity to meet with these students during my recent visit, and was quite impressed by their level of engagement and understanding. As technologists, we have an ethical responsibility towards the technologies which we help create and deploy. However, often we can’t foresee exactly how a given technology will be used, or misused, in practice (there’s a famous story that AT&T originally didn’t want to file a patent application on the laser, because they couldn’t understand what it had to do with telephones). I’d encourage you to always act according to your values, and I try to approach my leadership role in the same way. For more on values-based leadership, you might consider the book “Leadership in my rear view mirror” by former IBMer Jack Beach.
Question: “If technology is so important, why isn’t there a bigger push for it in grade school?”
In some places, there is a big push (particularly in countries outside the U.S.). In many other places, more needs to be done in order to update the curriculum and reflect the recent, growing importance of technology (for example, some states have firm requirements for high school graduation in subjects such as health education and gym, but no requirements for basic technical literacy). There are many reasons for this. Technology advances much faster than most elementary and middle schools can keep up. Changing the curriculum statewide requires significant effort (though programs like IBM’s sponsorship of P-TECH are making a good start, as recognized recently by President Obama). There are always financial, political, and other barriers to overcome in the real world. There are many schools experimenting with a broader STEM education curriculum, and the international FIRST Lego League program is part of that effort. There is also a growing recognition that the rules of higher education are changing as well, including recent efforts to develop very large online classes (MOOCs) delivered at very low cost. I think it’s important to provide students with an understanding of technology as a life skill. Even if they don’t plan to become engineers when they grow up, they will still need to vote on issues affected by technology, science, and math, including whether to believe the statistics from the latest election polls, whether they want to eat meat that’s been irradiated as a preservative, or whether we should invest more in manned or unmanned space exploration.
Question: “What role does disaster recovery play in modern data centers?”
There’s a lot of interest in disaster recovery, following the events of Sept. 11, 2001, and the subsequent array of natural and man-made disasters which have threatened major global data centers. I work on enterprise-class disaster recovery for Fortune 500 clients using IBM’s System Z platform, and I developed our qualification program for optical networking companies who want to have their solutions tested under these demanding conditions. Although most companies recognize the need for both disaster backup and business continuity, sometimes we need to be reminded of the business impacts which can result if these solutions are neglected or not tested and updated frequently. Large data centers need to carefully consider how much data they can afford to lose, how fast they need key functions to come back online after a disaster, and other factors so they can appropriately size their recovery strategy. I’m currently working on ways to use software defined networking (SDN) to rapidly re-provision long distance optical networks from wireless mobile controllers (such as a smart phone), so that you can move critical data around while you’re evacuating from a disaster site. We have a demo of this running at the SDN Innovation Lab in the NY State Center for Cloud Computing & Analytics, Marist College, NY, for those who are interested. You can also read more about disaster recovery in the latest edition of my book (insert shameless self-interested plug here), the Handbook of Fiber Optic Data Communication (4th edition, Elsevier/Academic Press).
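To make that sizing exercise concrete, here is a back-of-the-envelope sketch (my own illustration with made-up numbers, not an IBM sizing tool) of how a recovery point objective (RPO) bounds the replication interval, and how a lower bound on recovery time follows from data volume and link speed:

```python
def max_backup_interval_hours(rpo_hours):
    """The replication interval can be no longer than the RPO: any data
    written since the last replica is lost in a disaster."""
    return rpo_hours

def min_recovery_hours(data_gbytes, link_gbit_per_s, efficiency=0.7):
    """Lower bound on the time to restore data over a WAN link.

    efficiency is an assumed factor for protocol overhead and
    link contention; real restores are also gated by application
    restart and validation time.
    """
    gbits = data_gbytes * 8  # convert gigabytes to gigabits
    return gbits / (link_gbit_per_s * 3600 * efficiency)

# Example: restoring 10 TB of critical data over a 10 Gb/s link
hours = min_recovery_hours(data_gbytes=10_000, link_gbit_per_s=10)
```

Even this crude arithmetic shows why recovery strategies must be sized against actual data volumes and link capacity, rather than assumed to "just work" after a disaster.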
It’s always a pleasure to talk with students about the future of technology. If you’d like to continue the conversation, drop me a line on Twitter @Dr_Casimer