Choosing the Right N(etwork) for NFV
There is general consensus on the goals of network function virtualization (NFV): to improve agility and speed of service delivery, lower costs, and allow service providers to keep pace with the demands on their networks. NFV accomplishes this by helping service providers apply IT principles to networking: new services are rolled out as VMs on standard commodity servers, decoupling the services from the underlying hardware.
I will explore the role of network infrastructure in the successful rollout of network function virtualization. As a refresher, here is an outline of the NFV architecture that highlights the role of the network in that context.
Even if customers successfully deploy virtual network functions (VNFs) on an IT compute infrastructure, a lack of attention to the underlying network infrastructure can cause delays and jeopardize the benefits of NFV. It is important that the customer choose an SDN controller that not only works well with the VIM layer, but also provides the following added benefits:
- Support open, commodity network switches for the physical layer
- Automate the virtual network and also map the virtual network context to the physical network
- Provide granular visibility, analytics, and end-to-end troubleshooting, when required, for both the OpenStack admin (cloud admin / VIM-layer admin) and the physical network admin
Now, let’s look at how Big Switch’s Big Cloud Fabric (BCF) helps achieve these three goals:
- Open Ethernet Switches: An important goal of NFV is to eliminate proprietary hardware in favor of commodity hardware. That principle does not apply solely to compute and storage; it extends to networking hardware as well. Big Switch has spearheaded efforts to support its software on switches from multiple vendors, including Dell, HPE, and Edgecore Networks. This gives carriers much-needed freedom to choose among hardware platforms when rolling out their NFV infrastructure, just as they can choose any type of compute for deploying VNFs.
- Network Automation and Virtual-to-Physical Network Mapping: BCF uses a combination of an OpenStack Neutron plugin and Big Switch’s user-space agent, along with the OVS kernel module, to listen for all network additions, changes, and deletions in OpenStack. Not only does it provision the virtual network on OpenStack compute nodes, it also provisions the physical network with the right project (tenant) and network (segment) information. That way, the physical network keeps pace with cloud administration, and changes are carried out concurrently in the virtual and physical domains.
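The synchronization described above can be sketched as a simple event-driven step: when the cloud layer reports a network change, the controller records the tenant and segment context and mirrors it onto the physical fabric. The class and method names below (`FabricController`, `on_network_create`, and so on) are hypothetical illustrations of the pattern, not BCF’s actual plugin or controller API.

```python
# Illustrative sketch only: names and behavior are hypothetical,
# not Big Cloud Fabric's real interfaces.

class FabricController:
    """Tracks OpenStack tenant/segment context and mirrors it onto
    the physical fabric as segment (e.g. VLAN) configuration."""

    def __init__(self):
        # (tenant, network) -> segment id programmed on the fabric
        self.segments = {}

    def on_network_create(self, tenant, network, segment_id):
        # Record the virtual network context from the cloud layer...
        self.segments[(tenant, network)] = segment_id
        # ...and program the physical switches to match.
        return f"configured segment {segment_id} for {tenant}/{network} on fabric"

    def on_network_delete(self, tenant, network):
        # Remove the matching physical configuration when the
        # virtual network is deleted, keeping both domains in sync.
        segment_id = self.segments.pop((tenant, network), None)
        if segment_id is not None:
            return f"removed segment {segment_id} for {tenant}/{network} from fabric"
        return "no-op: unknown network"


ctl = FabricController()
print(ctl.on_network_create("acme", "web-net", 101))
print(ctl.on_network_delete("acme", "web-net"))
```

The key design point is that one event stream drives both domains, so the physical network never lags behind cloud-admin changes.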
- Visibility and Troubleshooting: The most significant benefit of the BCF architecture is the unprecedented visibility and troubleshooting it provides for tracing paths between VNFs across both virtual and physical switches. The troubleshooting interface is available to the network admin and the cloud admin alike, and shows both the same output, so multiple teams can validate the same information while troubleshooting. This is arguably one of the biggest gaps in most deployments and one of the hardest to solve; customers get excited when they are finally given a solution to what I call the “visibility gap” in NFV.
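Conceptually, tracing a path between two VNFs means walking a topology that spans virtual switches on the compute nodes and the physical leaf/spine switches between them. The toy sketch below models that with a breadth-first search over a made-up leaf-spine topology; the node names and structure are hypothetical, not BCF output.

```python
from collections import deque

# Toy leaf-spine fabric plus virtual switches, to illustrate
# end-to-end path tracing between two VNFs. Topology and node
# names are invented for the example.
links = {
    "vnf-a": ["vswitch-1"],
    "vswitch-1": ["vnf-a", "leaf-1"],
    "leaf-1": ["vswitch-1", "spine-1"],
    "spine-1": ["leaf-1", "leaf-2"],
    "leaf-2": ["spine-1", "vswitch-2"],
    "vswitch-2": ["leaf-2", "vnf-b"],
    "vnf-b": ["vswitch-2"],
}

def trace_path(src, dst):
    """Breadth-first search for the hop-by-hop path from src to dst."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# In this model, the same hop-by-hop trace is what both the cloud
# admin and the network admin would see.
print(" -> ".join(trace_path("vnf-a", "vnf-b")))
```

Because both admins query the same topology, they see identical hop lists, which is the property that closes the visibility gap described above.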
It is extremely important to consider these points when choosing the N(etwork) for NFV. To learn more about how BCF can help you improve agility and speed of service delivery, lower costs, and keep pace with the demands on your network, we worked with ACG Research to examine how a Tier 1 service provider ran a comparison, based on extensive analysis and testing, before deploying NFV at scale in its national services infrastructure. Read the report: Creating Agility & Efficiency at Scale: The Economic Advantages of Open Architecture Platforms in NFV Deployments
Director, Systems Engineering at Big Switch Networks