Software Defined Network, Telco Cloud & NFV


  • Multiple types of networks in SDN and their functionalities
  • Architecture and the working of an SDN
  • Modeling and programming techniques of an SDN
  • OpenFlow protocols, SDN-C, TL1, and YANG
  • Core Networks and their concept
  • ODL Community

Origin of SDN

  1. Martin Casado and his team proposed a clean-slate security architecture (SANE).
  2. The idea of Software Defined Networking originated from the OpenFlow project.
  3. Stanford published OpenFlow V1.0.0 specs. Martin co-founded Nicira.
  4. The Open Networking Foundation takes shape; the first Open Networking Summit is held.
  5. VMware buys Nicira for $1.26 billion.

Definition of SDN

SDN enables the network to respond quickly to changing business requirements through a centralized control console.

SDN in Real World – Google’s Approach

It is worth noting that the industry was initially skeptical about whether SDN could be implemented the way Google wanted to implement it. What was the reason for this skepticism?

Google faced high financial costs in managing its data centers. Many things had to be taken care of, such as hardware and software upgrades, over-provisioning, managing large backup traffic, managing individual switches, and the workforce.

Hence it was time-consuming to rebuild the routing tables after link failures, and it was difficult to predict how the new network might perform.

Google went ahead and successfully implemented SDN: it built its own hardware and wrote its own software for all of its internal data centers, and surprised the industry when it announced that the use of SDN is possible in production.

Virtualization is the basis of SDN

The concept of virtualization has prompted data centers to use resources only when they are actually needed. Virtualization separates the software from the hardware, and this separation is the basis of an SDN.

Legacy networks were not designed with virtualization in mind, and cloud and mobility create even more complex environments.

Networks must be able to adapt themselves in terms of security, scalability, and manageability.


In legacy networks, changes required by applications or users mean manual reconfiguration, which is time-consuming and expensive.

SDN offers programmatic control over both physical and virtual network devices.

OpenFlow or some other programmable and controllable packet/flow processing protocol can be used.

SDN takes a lesson from server virtualization and introduces an abstraction layer.

It separates network intelligence and configuration from physical connections and hardware.

SDN delivers speed and agility when deploying new applications and business services on top of existing ones. If you want stronger security, faster time to service, reduced operational overhead, and faster IT innovation, then:

  • An efficient solution is Cisco's Application Centric Infrastructure (ACI).
  • It offers a broad portfolio of IT solutions, together with leading IT technology partners.
  • It addresses these needs through SDN capabilities, open APIs, IT orchestration tools, and greater automation.

Cisco is one of the leading solution providers for SDN capabilities.

Software-defined networking is not the same as OpenFlow. OpenFlow is an open standard communications protocol that enables the control plane to interact with the forwarding plane.

Benefits of software-defined networking:

  • Directly programmable
  • Centralized management
  • Reduce CapEx and OpEx
  • Deliver agility and flexibility
  • Enables innovation

Traditional Network System

  • We have a control plane, management plane, and a data plane.
  • These use protocols for execution.
  • The control planes are distributed.
  • Only existing business requirements can be evaluated or executed.
  • It possesses very little agility and flexibility.
  • Usage of customized ASICs is highly prevalent.

Difference between SDN and Traditional Network

In a traditional IP network, the control and data planes are tightly coupled and embedded in the same networking devices, and the whole structure is highly decentralized. Computer networks can be divided into three planes of functionality:

  • Management Plane. Includes the software services, such as SNMP-based tools, used to remotely monitor and configure the control functionality. Network policy is defined in the management plane; the control plane enforces the policy, and the data plane executes it by forwarding data accordingly.
  • Control Plane. Represents the protocols used to populate the forwarding tables of the data plane elements.
  • Data Plane. Corresponds to the networking devices that are responsible for efficiently forwarding data.

Traditional Network vs SDN
  • In a traditional network, the devices have a control plane that builds the information used to construct a forwarding table.
  • They also have a data plane that consults the forwarding table; the latter is used by the network device to decide where to send frames or packets entering the device.
  • Both of these planes exist directly on the networking device.
  • Whenever the business requirements change, the traditional network architecture finds itself in a fix; in a software-defined network, the control plane is decoupled from the device.
  • The control plane runs on an SDN controller instead.
  • The controller communicates with a physical or virtual switch's data plane through a protocol such as OpenFlow.
  • This conveys instructions to the data plane on how to forward data.
  • SDN creates a dynamic and flexible network architecture that can change as the business requirements change.
  • By using an API, the controller can push network commands to multiple devices.
  • Changes can be performed programmatically.
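The API-driven, centralized control described in these bullets can be sketched in plain Python. All class and method names below are illustrative, not a real controller API: the point is that one programmatic call fans out to every device instead of a box-by-box CLI session.

```python
# Hypothetical sketch: a centralized SDN controller pushing one policy
# change to many data-plane devices through a single API call.

class Switch:
    """Minimal stand-in for a data-plane device with a flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []

    def install_rule(self, rule):
        self.flow_table.append(rule)

class Controller:
    """Centralized control plane: one call reconfigures all devices."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def push_rule(self, rule):
        for sw in self.switches:
            sw.install_rule(rule)

ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)

# One command reconfigures every registered device.
ctrl.push_rule({"match": {"dst": "10.0.0.5"}, "action": "drop"})
```

In a traditional network the same change would mean logging into each switch separately; here the controller holds the single point of programmability.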

Modeling Techniques of SDN

Quality attributes such as:

  • Extendibility
  • Application ecosystem
  • Application controllability
  • Supportability
  • Network Functionality
  • Programmability
  • Reliability
  • Visibility
  • Virtualization
  • Scalability
  • Performance
  • Security
  • Physical device
  • Virtual Device

In the SDN Solution Model, we generally speak about 3 levels namely:

  • Level 1: Business Level Canvas
  • Level 2: Unified Quality Model
  • Level 3: SDN Architecture

Different types of solution models are available, some of them are:

  • Use case based network management
  • A computational model for limb impedance control

Depending on the business level and the type of quality needed, one can deploy an SDN environment.

Dependability Model

From a quality of service (QoS) point of view, an SDN controller is a complex system.

Its operation can depend heavily on a variety of parameters:

  • Its degree of distribution
  • The corresponding topology
  • The number of network devices to control

Programming Techniques of SDN

  • We make use of languages that have logic and mathematics built into them.
  • Natural languages are not used.
  • Formal verification of the syntax is performed.

Frenetic and NetCore:

  • These are high-level programming languages that can be used to write OpenFlow applications running on top of NOX.


  • This approach performs symbolic execution of OpenFlow applications and applies model checking to explore the state space of an entire OpenFlow network.
  • It is a proactive approach that finds invalid system states by using a simplified OpenFlow switch model.
  • It is not used in real time.


  • This approach uses the Haskell language.
  • If a program is incomplete or contains malicious entities, the entire network can be affected.
  • Hence, formal verification is performed.

A common framework has:

  • A formal specification and programming approach
  • A verification approach
  • An implementation methodology approach

Here, a three-tier architecture is used.

In Tier 1, we have the forwarding entities and any software/hardware components comprising them.

In Tier 2, the control and management entities that use the details of Tier 1 are employed.

In Tier 3, the applications and services that make use of the details of Tiers 1 and 2 are employed.

Verification ensures that there are no loops or black holes in the network.

The configurations are such that application execution does not break down on the respective network.
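A minimal sketch of the kind of check such verification performs, assuming a toy forwarding map rather than a real network model: follow next-hops for a destination and flag the two failure modes named above, loops and black holes.

```python
# Illustrative sketch (not a real verifier): check a static forwarding
# map for loops and black holes.

def check_path(forwarding, src, dst):
    """Follow next-hops from src toward dst.
    Returns 'ok', 'loop', or 'black hole'."""
    visited = set()
    node = src
    while node != dst:
        if node in visited:
            return "loop"
        visited.add(node)
        node = forwarding.get(node)   # next hop toward the destination
        if node is None:
            return "black hole"       # no entry: traffic is silently dropped
    return "ok"

# Toy topologies: next-hop entries for destination D.
good = {"A": "B", "B": "C", "C": "D"}
looped = {"A": "B", "B": "A"}
holed = {"A": "B"}                    # B has no entry for D

print(check_path(good, "A", "D"))     # ok
print(check_path(looped, "A", "D"))   # loop
print(check_path(holed, "A", "D"))    # black hole
```

Real verifiers (model checkers, symbolic execution, as mentioned earlier) explore all packets and states rather than one path, but the properties being checked are the same.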

SDN programming should support formal semantics in high-level languages.

It should also support conceptual models that can exist independently on the network.

Pyretic is a Python-based language.

All popular programming languages are used in SDN products: C in OVS and Kandoo, C++ in NOX and ONIX, etc.

The most used scripting language in SDN is Python.

Pyretic is part of the Frenetic project.
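To illustrate the idea behind Pyretic-style programming, here is a plain-Python sketch. It mimics the concept (policies as functions from a packet to a list of output packets, with parallel and sequential composition) rather than Pyretic's actual API; `match`, `fwd`, `seq`, and `par` are names chosen here for illustration.

```python
# Sketch of policy composition in the Pyretic style, in plain Python.

def match(**fields):
    """Policy: pass the packet through unchanged only if all fields match."""
    def policy(pkt):
        if all(pkt.get(k) == v for k, v in fields.items()):
            return [pkt]
        return []
    return policy

def fwd(port):
    """Policy: copy the packet and set its output port."""
    def policy(pkt):
        out = dict(pkt)
        out["outport"] = port
        return [out]
    return policy

def seq(p1, p2):
    """Sequential composition: feed p1's results into p2."""
    def policy(pkt):
        return [r for m in p1(pkt) for r in p2(m)]
    return policy

def par(p1, p2):
    """Parallel composition: apply both policies and merge the results."""
    def policy(pkt):
        return p1(pkt) + p2(pkt)
    return policy

# Send web traffic out port 1, and mirror every packet to port 9.
web_to_1 = seq(match(dstport=80), fwd(1))
mirror = fwd(9)
network_policy = par(web_to_1, mirror)

print(network_policy({"dstip": "10.0.0.2", "dstport": 80}))
```

A web packet yields two output copies (ports 1 and 9); a non-web packet yields only the mirrored copy. Composition is what lets independent concerns (routing, monitoring) be written separately and combined.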

OpenFlow Protocols

OpenFlow is the standard protocol defined for SDN.

It enables the controller, which is the brain of the SDN network, to directly interact with the forwarding plane of the network devices.

Devices can be switches and routers, both physical and virtual.

It can better adapt to changing business requirements.

Any device that wants to communicate with an SDN controller must support the OpenFlow protocol.

The controller manages the traffic (network flows) by manipulating the flow tables at the switches. Instructions are stored in the flow tables.

When a packet arrives at the switch, its header fields are matched against the flow entries in the flow table.

If an entry matches, the switch performs the indicated actions and updates the counters.

If no entry matches, the switch asks the controller by sending it a message with the packet header.
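The match/miss logic above can be sketched as follows. This is a simplified model, not the OpenFlow wire protocol: a flow entry carries a match and a list of actions, and a table miss punts the header to the controller.

```python
# Sketch of the switch-side lookup: match headers against flow entries;
# on a hit, return the actions and bump the counters; on a miss, send
# the header to the controller.

class FlowEntry:
    def __init__(self, match, actions):
        self.match = match          # header field -> required value
        self.actions = actions      # e.g. ["output:2"]
        self.packet_count = 0

def process_packet(flow_table, headers, send_to_controller):
    for entry in flow_table:
        if all(headers.get(f) == v for f, v in entry.match.items()):
            entry.packet_count += 1      # update the counters
            return entry.actions         # perform the indicated actions
    send_to_controller(headers)          # table miss: ask the controller
    return []

table = [FlowEntry({"dst_mac": "aa:bb:cc:dd:ee:ff"}, ["output:2"])]
missed = []

hit = process_packet(table, {"dst_mac": "aa:bb:cc:dd:ee:ff"}, missed.append)
print(hit)        # ['output:2']
miss = process_packet(table, {"dst_mac": "11:22:33:44:55:66"}, missed.append)
print(missed)     # the unmatched header went to the controller
```

The controller would typically react to the miss by installing a new flow entry, so subsequent packets of the same flow are handled entirely in the data plane.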

OpenFlow Table: Basic Actions

  • All: Send to all interfaces, except the incoming interface
  • Controller: Encapsulate and send to the controller
  • Local: Send to the switch's local networking stack
  • In-Port: Send back to the input port
  • Normal: Forward using traditional Ethernet processing
  • Flood: Send along the minimum spanning tree

Switches that support OpenFlow


  • OpenFlow allows separation of the control and data planes
  • Centralization of control
  • Flow-based control
  • Takes advantage of the flow tables already present in Ethernet switches and routers
  • In SDNs, the control plane is separated from the physical network and placed in a centralized controller
  • These controllers use OpenFlow to communicate with all components on the network
  • This approach lets the entire network be managed as a single system rather than device by device
  • The controller and switches communicate via the OpenFlow protocol

Why we should use the OpenFlow protocol?

One school of thought holds that an OpenFlow-based SDN provides the network programmability needed to better manage server virtualization environments.

OpenFlow Protocol

  • When an administrator moves a virtual machine from one server to another, the network must be able to automatically adjust VLANs, quality-of-service (QoS) policies, and access control lists.
  • In a traditional network, this adjustment does not happen automatically.
  • Instead, the changes must be made manually, which is slow and error-prone.
  • The OpenFlow protocol handles this without any problems.
  • Google was a pioneer in building its own SDN switches.

OpenDaylight (ODL) Community

The ODL platform enables network services across a spectrum of hardware in multi-vendor environments. It is a common platform that can be configured to meet today's requirements, integrating open source, open standards, and open APIs.

ODL makes the network more programmable, intelligent and adaptable.

The ODL project is an open source project hosted by the Linux Foundation; it was founded in 2013 to advance SDN adoption and to create a basis for strong network functions virtualization.

This project does not necessarily have to run on Linux systems only. OpenFlow is a primary example of an SDN protocol supported by this OpenDaylight project.

The Open Platform for NFV project (OPNFV) promotes an open source network ecosystem that brings companies together to accelerate innovation.

The principal objectives of the platform for NFV will also include:

  • Creating an integrated and verified open source platform.
  • Providing a proactive cooperation of all end users.
  • Forming an open environment for NFV products.
  • Contributing and engaging in open source projects.

Which is Better? SDN or NFV

Both approaches are complementary to each other.

SDN separates the network's control (brains) and forwarding (muscle) planes to provide a centralized view of the distributed network. This results in more efficient orchestration and automation of all network services.

NFV focuses on optimizing the network services themselves. It decouples network functions, such as DNS, caching, etc., from proprietary hardware appliances.

Fundamentals of Virtualization

Virtualization refers to the creation of a virtual machine that can virtualize all the hardware resources, including processors, memory, storage, and network connectivity. Physical hardware resources can be shared by one or more virtual machines.


Server. The hardware comprising the hard disks, RAM, CPU, network connectivity, power supply, and CMOS battery. Consider it a high-end PC.

Hypervisor. The hypervisor enables communication between the hardware and a virtual machine; virtualization is accomplished through this abstraction layer. The hypervisor segregates a physical server into multiple logical servers. These logical servers behave as independent physical machines.

Guest OS. The OS running on the virtual machine. The guest OS behaves as though it is running on an independent physical machine, but in fact that machine is a logical server created on top of the hypervisor.

Application. The software installed on the guest OS.

NIC (Network Interface Card or the Ethernet adapter). The hypervisor segregates the physical NIC into logical interfaces called virtual NICs. At a hardware level, it is still a single NIC carrying the traffic of all the vNICs.

Virtual Machine (VM). Created on top of the hypervisor with the required hardware resources: a self-contained software emulation of a machine that does not physically exist, but shares the resources of an underlying physical machine.
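The relationship between server, hypervisor, and VMs described above can be modeled in a toy example. All names here are illustrative, not a real hypervisor API; the point is that the hypervisor carves one physical server's resources into independent logical machines.

```python
# Toy model: a hypervisor partitioning one physical server into VMs.

class Server:
    """The physical host: fixed pool of CPUs and RAM."""
    def __init__(self, cpus, ram_gb):
        self.cpus, self.ram_gb = cpus, ram_gb

class Hypervisor:
    def __init__(self, server):
        self.server = server
        self.vms = []

    def allocated(self):
        """Total (cpus, ram) currently handed out to VMs."""
        return (sum(vm["cpus"] for vm in self.vms),
                sum(vm["ram_gb"] for vm in self.vms))

    def create_vm(self, name, cpus, ram_gb):
        used_cpus, used_ram = self.allocated()
        # Refuse to exceed the physical machine (real hypervisors may
        # allow controlled overcommit).
        if used_cpus + cpus > self.server.cpus or \
           used_ram + ram_gb > self.server.ram_gb:
            raise RuntimeError("insufficient physical resources")
        vm = {"name": name, "cpus": cpus, "ram_gb": ram_gb}
        self.vms.append(vm)
        return vm

hv = Hypervisor(Server(cpus=16, ram_gb=64))
hv.create_vm("web01", cpus=4, ram_gb=8)
hv.create_vm("db01", cpus=8, ram_gb=32)
print(hv.allocated())   # (12, 40) of (16, 64) in use
```

Each VM then runs its own guest OS, unaware that it shares the underlying hardware.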

Types of Virtualization

Based on hardware or the function that is being virtualized, the different types of virtualization are:

Server Virtualization. This is what was described above: a physical server divided into multiple logical servers.

Application Virtualization. In this type of virtualization, it is not the server but the application that is virtualized. A remote machine connects to the server and uses the application. Dynamic-link library programs redirect all of the virtualized application's calls to the server's file system. When software runs from the server in this manner, no changes are made to the local computer's operating system or registry.

Hosted Virtualization. This requires you to install an operating system before deploying the hypervisor, which means the hypervisor does not have direct access to hardware resources. These hypervisors are often cheaper than bare-metal hypervisors, so they are well suited for small data centers or test and development environments.

Storage Virtualization. Instead of inserting a bunch of hard disks into each server, storage virtualization keeps the hard disks separate from the servers.

Fundamentals of Network Virtualization

There are basically two objects of virtualization:

  1. Device virtualization
    • Virtualize physical devices (nodes) in the network
      • Data Plane virtualization
      • Control Plane virtualization
      • Management Plane virtualization
  2. Data path virtualization
    • Virtualize communication path between network access points
      • Links virtualization

Device Virtualization

  • The memory of the frame-forwarding silicon is divided between VRFs
  • Router interfaces are bound to specific VRFs
  • Each VRF contains one routing table and one forwarding table
  • There is no virtualization of the router management:
    • One CLI
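A minimal sketch of VRF-style device virtualization, assuming a toy router model with per-VRF longest-prefix-match tables: each interface is bound to exactly one VRF, so overlapping address space in different VRFs never collides.

```python
# Sketch of a router whose forwarding state is divided between VRFs.

import ipaddress

class VirtualRouter:
    def __init__(self):
        self.vrfs = {}       # vrf name -> list of (prefix, next_hop)
        self.if_vrf = {}     # interface -> vrf name

    def bind_interface(self, interface, vrf):
        self.if_vrf[interface] = vrf
        self.vrfs.setdefault(vrf, [])

    def add_route(self, vrf, prefix, next_hop):
        self.vrfs[vrf].append((ipaddress.ip_network(prefix), next_hop))

    def lookup(self, in_interface, dst):
        """Longest-prefix match inside the ingress interface's VRF only."""
        vrf = self.if_vrf[in_interface]
        addr = ipaddress.ip_address(dst)
        best = max((r for r in self.vrfs[vrf] if addr in r[0]),
                   key=lambda r: r[0].prefixlen, default=None)
        return best[1] if best else None

r = VirtualRouter()
r.bind_interface("eth0", "customer-a")
r.bind_interface("eth1", "customer-b")
# Both customers use the same 10.0.0.0/8 space without conflict.
r.add_route("customer-a", "10.0.0.0/8", "192.0.2.1")
r.add_route("customer-b", "10.0.0.0/8", "198.51.100.1")
print(r.lookup("eth0", "10.1.2.3"))   # 192.0.2.1
print(r.lookup("eth1", "10.1.2.3"))   # 198.51.100.1
```

There is still only one management interface (one CLI) over all of these tables, matching the bullet above.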

Data Path Virtualization

We need to use some other network technology to share a link between the traffic of many VRFs. We can use:

  • VLANs
  • MPLS
  • GRE tunnels
  • IP-in-IP
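The tagging idea behind all of these technologies can be sketched with a toy VLAN-style encapsulation (simplified for illustration; a real VLAN tag lives inside the Ethernet header): each frame carries an identifier so one physical link can multiplex traffic for several VRFs, and the far end demultiplexes it again.

```python
# Toy data path virtualization: tag frames so one link carries many VRFs.

def encapsulate(vrf_to_tag, vrf, payload):
    """Attach the VRF's tag before the frame enters the shared link."""
    return {"vlan": vrf_to_tag[vrf], "payload": payload}

def decapsulate(tag_to_vrf, frame):
    """Use the tag to hand the payload back to the right VRF."""
    return tag_to_vrf[frame["vlan"]], frame["payload"]

vrf_to_tag = {"customer-a": 100, "customer-b": 200}
tag_to_vrf = {v: k for k, v in vrf_to_tag.items()}

# Both VRFs share the same physical link, kept apart by the tag.
link = [encapsulate(vrf_to_tag, "customer-a", "pkt1"),
        encapsulate(vrf_to_tag, "customer-b", "pkt2")]

for frame in link:
    vrf, payload = decapsulate(tag_to_vrf, frame)
    print(vrf, payload)
```

MPLS labels, GRE keys, and IP-in-IP outer headers play the same role as the `vlan` field here: an identifier that keeps tenants' traffic isolated on a shared path.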

Network Virtualization Advantages

Resilience.- The virtual network will automatically redirect packets over redundant links. In case of disaster, the virtual network can easily be recreated on new physical infrastructure.

Security.- Increased data traffic isolation and user segmentation. A virtual network should work with firewall software.

Network Functions

Network functions implemented in applications include:

  • Routing
  • Firewalling
  • Load balancing
  • Network Address Translation (NAT)
  • Access Gateway
  • WAN acceleration
  • QoE monitoring
  • Deep packet inspection (DPI)
  • Broadband Remote Access Server (BRAS)
  • Session Border Controller (SBC)

Overview of NFV Architecture

VNF (Virtual Network Function)

A VNF is the basic block in the NFV architecture. It is the virtualized network element. For example, when a router is virtualized, we call it a Router VNF; another example is a base station VNF. Even when only one sub-function of a network element is virtualized, it is called a VNF. In the router case, for example, various sub-functions of the router can be separate VNFs that together function as a virtual router.

EMS (Element Management System)

This is the element management system for a VNF. It is responsible for the management of VNF operation, in the same way physical network elements are managed by their respective EMS (for example, fault and performance management). It may manage VNFs through proprietary interfaces. There may be one EMS per VNF, or one EMS may manage multiple VNFs. The EMS itself can be a VNF.

VNF Manager

A VNF Manager manages one or multiple VNFs, i.e., it performs the life-cycle management of VNF instances. Life-cycle management means setting up, maintaining, and tearing down VNFs. A VNF Manager can perform the same functions as an EMS, but through the open interface/reference point proposed in the NFV architecture, named Ve-Vnfm.

NFVI (Network Functions Virtualization Infrastructure)

NFVI is the environment in which VNFs run. This includes physical resources, virtual resources, and the virtualization layer.

Virtual Compute, Virtual Storage, and Virtual Network Resources

This is the virtual part of the NFVI. The physical resources are abstracted into virtual resources that are ultimately utilized by the VNFs.

Virtualization Layer

This layer is responsible for abstracting physical resources into virtual resources. The common industry term for this layer is “Hypervisor”. This layer decouples software from hardware which enables the software to progress independently from hardware.

VIM (Virtualized Infrastructure Manager)

This is the management system for NFVI. It is responsible for controlling and managing the NFVI compute, network and storage resources within one operator's infrastructure domain. It is also responsible for the collection of performance measurements and events.

NFV Orchestrator

The orchestrator generates, maintains, and tears down the network services of the VNFs themselves. If there are multiple VNFs, the orchestrator enables the creation of end-to-end services over those VNFs.
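The split between per-VNF life-cycle management and end-to-end service orchestration can be sketched as follows. All class and method names here are illustrative, not ETSI-defined APIs: the VNF Manager instantiates and terminates individual VNFs, while the orchestrator chains several VNFs into one service.

```python
# Toy model of the NFV management split: VNF Manager vs Orchestrator.

class VNF:
    def __init__(self, kind):
        self.kind, self.state = kind, "instantiated"

    def terminate(self):
        self.state = "terminated"

class VNFManager:
    """Life-cycle management of individual VNF instances."""
    def instantiate(self, kind):
        return VNF(kind)

class Orchestrator:
    """Builds end-to-end services out of chains of VNFs."""
    def __init__(self, vnfm):
        self.vnfm = vnfm
        self.services = {}

    def create_service(self, name, chain):
        # An end-to-end service is an ordered chain of VNFs.
        self.services[name] = [self.vnfm.instantiate(k) for k in chain]

    def tear_down(self, name):
        for vnf in self.services.pop(name):
            vnf.terminate()

orch = Orchestrator(VNFManager())
orch.create_service("broadband", ["BRAS", "firewall", "NAT"])
print([v.kind for v in orch.services["broadband"]])
orch.tear_down("broadband")
```

In a real deployment the orchestrator would also talk to the VIM to reserve NFVI resources before instantiating each VNF; that interaction is omitted here.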

OSS/BSS refers to the OSS/BSS of an operator.

OSS deals with network management, fault management, configuration management and service management.


OpenStack is a collection of open source technologies delivering a massively scalable cloud operating system. It is a free and open-source software platform for cloud computing, mostly deployed as infrastructure-as-a-service (IaaS). The platform consists of interrelated components that control diverse, multi-vendor hardware pools of processing, storage, and networking resources throughout a data center.

Provisioning for Entirely Orchestrated Virtualized Network

Traditional Provisioning Approach

OpenStack Provisioning Approach


  • Virtualization refers to the creation of a virtual machine that can virtualize all of the hardware resources, including processors, memory, storage, and network connectivity.
  • Based on the hardware or function being virtualized, the different types of virtualization are: server virtualization, application virtualization, hosted virtualization, storage virtualization, and network virtualization.
  • In device virtualization, the memory (where the routing and forwarding tables are stored) of the frame-forwarding silicon is divided between VRFs.
  • In the software forwarding technique, any frame forwarding done by the network hardware can instead be implemented in software.
  • OpenStack is essentially a cloud operating system.
  • A VM template is a virtualization advantage whereby an entire application, along with its OS, can be stored as a backup.
