OpenFlow and Carrier Ethernet 2.0 set to be the new buzzwords
Carrier Ethernet 2.0 and OpenFlow were the topics of discussion at the NetEvents IT forum, Hong Kong. By Pupul Dutta
The Metro Ethernet Forum, or MEF as it is popularly known, launched the Carrier Ethernet 2.0 (CE 2.0) certification at a conference organized by NetEvents in Hong Kong. CE 2.0 moves the ball forward by supporting multiple classes of service plus manageability across interconnected provider networks, differentiating it from the simple standardized Ethernet service delivered over a single provider’s network, now called Carrier Ethernet 1.0. The technology also allows operators to trace the path of a service on an end-to-end basis.
The certification is useful for both the subscriber and the service provider, as it enables mutual agreement on a standard set of techniques, easing implementation in real-world networks. “It’s not the solution or the be all and end all technique but it is a step along the way to a seamless global Ethernet LAN and it comes in four flavors—E-Line, E-LAN, E-Tree and E-Access,” said Daniel Bar-Lev, Director, Certification Programs, MEF.
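The four service flavors correspond to different Ethernet Virtual Connection (EVC) topologies. As a rough orientation aid (a sketch, not MEF source material; the dictionary names and wording are illustrative only):

```python
# Illustrative mapping of the four CE 2.0 service types to their
# connection topologies. The structure and names here are for
# illustration and are not drawn from any MEF specification text.
CE2_SERVICE_TYPES = {
    "E-Line":   "point-to-point EVC (a virtual private line between two sites)",
    "E-LAN":    "multipoint-to-multipoint EVC (any-to-any, LAN-like connectivity)",
    "E-Tree":   "rooted-multipoint EVC (hub-and-spoke; leaves reach only roots)",
    "E-Access": "access service handed off to another operator at an ENNI",
}

for name, topology in CE2_SERVICE_TYPES.items():
    print(f"{name}: {topology}")
```

E-Access is the flavor most relevant to the wholesale interconnect discussion below, since it covers the case where one operator provides the access leg of another provider's end-to-end service.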
Divesh Gupta, Assistant Vice President of Pre-sales, PCCW Global, said, “When you look at E-Access from a customer’s standpoint, it becomes important or rather necessary to provide end-to-end SLAs. In such scenarios, we provide the services over dedicated Ethernet or an E-Line or E-LAN that guarantees end-to-end latency, frame loss, availability and all kinds of SLAs that the customers demand of us.”
According to Gupta, unless a company provided the last leg of end-to-end connectivity, it wasn’t good enough. “CE 2.0 certification will ensure that a service provider provides meaningful SLAs to the end customer,” he added.
Also, with a common standard in place, the need to test each time a company connects to a new operator would shrink, since both sides would be speaking the same language.
Jim Machi, SVP Marketing, Dialogic Corporation, said, “With Carrier Ethernet 2.0, the mobile backhaul situation would improve considerably. Backhaul refers to carrying the data from our phones to a mobile tower and subsequently onto the network. Currently, most mobile carriers use TDM equipment, which costs about $6-7 billion per year. By using Ethernet instead of TDM, they can guarantee a class of service, which is only possible through this initiative. Also, with the use of Ethernet in the backhaul, people will get better services.”
Basically, CE 1.0 defined how to deliver a standard Ethernet service so that all of the providers and equipment vendors knew what a service was—whether it was point-to-point or multipoint-to-multipoint. CE 2.0 allows service providers and operators to utilize the infrastructure more efficiently.
“There’s a bit of a delta in complexity with this new standard, but vendors and service providers profit a lot more. They recoup that effort by virtue of the fact that they can now deliver many more services over the same infrastructure much more effectively, more quickly and at a lower operating cost, while at the same time adding higher-value services,” said Bar-Lev of MEF.
The conference also delved into OpenFlow, a technology on which vendors like HP have built network applications. It has the potential to automate configuration, improve network efficiency and reduce the total cost of ownership.
Bruce Bateman, Networking Evangelist APJ, Dell Force10, said, “The community is changing; the customers are asking for more openness and open standards; they are demanding products that have the ability to interact with other devices.”
Highlighting the problems faced by current-day data centers, Dr. Atsushi Iwata, Assistant General Manager of Cloud System Research Laboratories, NEC, said, “The current problem in the data center market is that, due to the growing amount of virtualization software on the server side, dynamic network configuration to support these environments and a flexible flat layer-2 network spanning several racks are required. This has numerous limitations. The current data center also has to support a lot of tenancy (which provides isolation of servers, storage and network for each tenant) for different customers, and the current number of VLANs is limited to about 4,000, whereas customers want more than 4,000 for the flexibility and control of those tenants. All of this is not supported by the current system.”
According to Iwata, OpenFlow could solve this problem: SDN provides network virtualization that carves out a slice of the virtualized network from layer 2 to layer 4, allowing a flat layer-2 network for each tenant.
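The ~4,000-tenant ceiling Iwata mentions comes from the 802.1Q VLAN tag, whose VLAN ID field is 12 bits wide; flow-based slicing sidesteps it by keying a tenant on any combination of layer 2-4 header fields instead of the tag alone. A minimal sketch of both points (the match-field names below are illustrative, loosely modelled on OpenFlow match fields, not a real controller API):

```python
# The 802.1Q VLAN ID is a 12-bit field; IDs 0 and 4095 are reserved,
# leaving 2**12 - 2 = 4094 usable VLANs -- the "about 4,000" ceiling.
VLAN_ID_BITS = 12
usable_vlans = 2 ** VLAN_ID_BITS - 2

# An OpenFlow-style match, by contrast, can identify a tenant slice by
# any combination of L2-L4 header fields. Field names here are
# illustrative only (hypothetical, not a specific controller's API).
tenant_slice = {
    "eth_src":  "00:11:22:33:44:55",  # layer 2: source MAC
    "ipv4_dst": "10.0.42.0/24",       # layer 3: destination subnet
    "tcp_dst":  443,                  # layer 4: destination port
}

print(f"Usable 802.1Q VLAN IDs: {usable_vlans}")
print(f"Tenant slice keyed on {len(tenant_slice)} L2-L4 fields")
```

Because the slice is defined by flow rules rather than a scarce tag value, the number of isolable tenants is no longer bound by the 12-bit VLAN ID space.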
However, this is not easy to accomplish, and migrating to a completely virtualized network will probably take a while. Bateman of Dell believed the need to virtualize the network would arise less from a technology perspective than from a financial one, as an organization typically wouldn’t even know what kind of technology it is using or what would be appropriate for its data centers. In fact, most of the time, organizations don’t know what they want; all they know is roughly what they require and what they don’t want.
“Data center owners currently want to reduce their OPEX and, by spending a little bit on CAPEX, they can reduce their operating costs. For this they need products that support new technologies such as OpenFlow and OpenStack, and that’s where the change will begin to happen. It will take a few years as it is also a cultural issue,” explained Bateman.
“The other drivers for SDN or OpenFlow would be things like Big Data, Hadoop-type environments, the consumerization of IT, mobility and so on, which are coming together and will drive a lot of change. So the combination of these things, the need for bandwidth, and smart application awareness in the network will drive the data center transition,” said Mark Pearson, Chief Technologist, Data Center and Core, Advanced Technology Group, Networking, HP.
Pearson concluded, “We need to think about the degree of openness from the device layer, through the controller layer, and also from the end-solutions perspective. On top of that, an enterprise-grade solution needs complete testing, support processes and all the things that come together in the ecosystem.”