Platform Actively Measuring Web Traffic from Fog Nodes

Noriaki Kamiyama, NTT Network Technology Laboratories

In recent years, HTTP traffic has come to dominate Internet traffic. However, it has been reported that two-thirds of users encounter slow websites every week, and that about half of users abandon a website after experiencing performance issues. It has also been reported that Amazon increased revenue by 1% for every 0.1-second reduction in web-page load time. Modern webpages consist of many different data objects, and the servers providing those objects have diversified; for example, advertisement objects are obtained from dedicated servers. As a result, the set of objects that constructs a single website has become diverse. Hence, to improve the quality of the Internet by delivering web traffic efficiently, continuously monitoring the communication structure of web traffic is one of the most important tasks. One possible approach for ISPs is active measurement of web traffic, i.e., measuring the web traffic generated by periodically accessing various websites from probing nodes or terminals. To improve user-perceived quality, we need to grasp the communication structure of web traffic as seen from locations close to end users.

Recently, the concept of Fog computing, which extends the Cloud computing paradigm to the edge of the network, has attracted wide attention as a distributed-computation platform for wireless sensors. Fog computing is a highly virtualized platform that provides compute, storage, and networking services at Fog nodes located between end devices, e.g., wireless sensors, and traditional Cloud computing data centers. Therefore, we can expect to use Fog computing as a platform for measuring a communication structure of web traffic that is close to the one observed by end users. In this paper, we propose a platform that continuously measures the communication structure of web traffic based on Fog computing. Figure 1 illustrates the proposed platform for actively measuring the communication structure of web traffic using Fog computing. The aim of this measuring platform is to measure the generated communication pattern, i.e., the distance to accessed hosts, as well as the communication quality, i.e., RTT and delay, when accessing websites.

Based on the measurement configuration set by the network operator, each Fog node measures various properties of the web traffic generated by accessing a variety of websites, as shown in Fig. 2. Each Fog node periodically accesses the Cloud data center to update its probing configuration to the latest one set by the network operator, as shown in Fig. 2(b). On the basis of the probing configuration, each Fog node sequentially accesses multiple websites according to the URL list obtained from the probing database, as shown in Fig. 2(c), and measures various properties of the web traffic. Each Fog node then sends the obtained statistical data on web-traffic properties to the Cloud data center, which analyzes the collected data as shown in Fig. 2(d). To understand how each property differs when websites are accessed from various locations, the Cloud data center classifies webpages into groups based on the geographical tendencies of each property. We experimentally evaluate the proposed measuring platform by regarding 12 PlanetLab hosts as the Fog nodes, i.e., probing nodes.


Virtual network embedding under multiple control policies

Masahiro KOBAYASHI, Ryutaro MATSUMURA, Takumi KIMURA, Toshiaki TSUCHIYA, and Katsunori NORITAKE, NTT Network Technology Laboratories

In a network virtualization environment, physical network resources are prepared by network providers and then shared by a number of virtual networks (VNs) so that each VN can meet the service level required by service providers. This resource management, called virtual network embedding (VNE), is often solved using mathematical optimization techniques with various objective functions such as resource consumption, service quality (delay/loss), reliability, and energy consumption. Conventional methods are typically based on a single objective function applied over the entire network, and that objective function does not change. We consider the case in which objective functions may change in some local part of a network depending on time and circumstances, e.g., temporary traffic overload caused by an event or a natural disaster. In our proposal, a physical network is divided into several subnetworks (domains), and a resource-management policy (objective function) is assigned to each domain. An individual VN is assigned physical resources under different policies along its path (from start to end point), while the end-to-end service level is not violated. Through simulation, we compare our proposed method with existing ones and verify its effectiveness from the viewpoint of network capacity, i.e., the number of VNs that can be accommodated.


Quality Measurement of Over-the-Top Video at Massive Scale

David T. Kao, Time Warner Cable

Over-the-Top (OTT) video is the single largest driver of Internet traffic growth worldwide. In North America, OTT video accounts for about two-thirds of peak-hour subscriber-bound traffic on fixed access networks. Because of OTT video's sensitivity to network impairments, congestion, and capacity, and because of its substantial share of overall Internet traffic, OTT video metrics can be used as a proxy for overall subscriber Quality of Experience (QoE).

While OTT video metrics can easily be measured by content providers and distributors, it is much harder for an Internet Service Provider (ISP) to do so at scale. Some content providers, such as Netflix and Google, do make selected OTT video metrics available to the public, but only in highly aggregated form.

To better plan capacity augmentation and to increase subscriber QoE, it is important for an ISP to measure OTT video metrics objectively and quantitatively. Measuring OTT video metrics cost-effectively, granularly, and at scale has, however, been a grand challenge for ISPs.

This presentation focuses on adapting NetFlow as a technology to measure average bitrate, the most widely used OTT video quality metric, at massive scale: for every video stream of every subscriber in the ISP network. One key enabler of such massive scalability is sampled flow. We will examine the sampled-flow methodology in the context of sampling rate and calibration, and study the trade-off between data fidelity and system scalability. The proposed approach can readily be extended to video metrics beyond average bitrate.


Network Health Assessment – Using Big Data to Perform Network Diagnosis and Predict Future Issues

Qin Wu, Huawei

Big Data is rapidly becoming a part of our lives. What was once a store of data that was processed laboriously and off line (for example, to determine shopping habits of a population, or to understand correlations of medical conditions) has become an online, dynamic tool for prediction and cause analysis.

In networking, Big Data can be a tool for many applications and to answer many questions. Just as online analysis of patient data (heart rate, blood pressure, temperature, etc.) can be used to determine trends and so predict medical problems, diagnose illnesses, and find causes of sickness, so dynamic processing of network data can provide a Network Health Indicator.

Big Data in networking means the processing of all data about all aspects of the network. That means traffic demands, throughput, link and node loads, temperatures, application characteristics, CPU loading, memory utilisation, jitter, delay, loss, and all other measures of quality of experience and network resource performance. But for this data to be meaningfully correlated it has to be measured, recorded, collected, and accurately timestamped.

In this presentation we will examine some of the key measurements, how they can be collected and correlated, and how they can be used to understand the status of the network and predict future trends.
The presenter will define the Network Health Indicator as a numerical indication of the degree of network anomaly, a measurement of the network service quality, and a tool for performing root cause analysis. He will show how this can be constructed from information such as:

• The severity level and frequency of network incidents
• The available protection levels of nodes and links
• The presence of backup paths
• How much of the network is overloaded or congested
• The complexity of the network
• The perceived performance of the network.
One example of how a Network Health Indicator can be represented to a human user is a curve that shows the measure of network anomaly as it changes over time. Another is a radar chart plotting multiple aspects of the network's performance.
This presentation will also consider how Network Health Indicators can be compared with Network KPIs and contracted SLAs, how future risks to the network may be determined, how impending crises (such as meltdown or DDoS attack) may be spotted, and how remedial actions can be triggered.


Introduction

Bijan Jabbari, Isocore

 


Keynote Speech

Dave Ward, Ken Gray, Cisco

 


Fault Detection and Active Monitoring in NFV Networks

Gurpreet Singh, Spirent

NFV-based networks promise improved service agility, elastic scaling, and centralized, automated provisioning, with the intent of reducing both CAPEX and OPEX while making Service Providers more competitive through faster introduction of new services. At the same time, NFV-based networks are characterized by infrastructure shared across virtual network functions and by dynamic migration of VNFs across physical infrastructure resources to optimize resource provisioning and utilization. These characteristics present fault-detection and performance-monitoring challenges that are unique to NFV-based networks, in both lab and live environments. In this presentation, we will discuss the key aspects of fault detection and performance monitoring for NFV-based networks.


Opening Speech

 

 


Operation support system: future direction

Tomohiro Otani, KDDI Labs

Communication Service Providers (CSPs) have traditionally relied on operation support systems (OSS) on top of EMS/NMS for network operational efficiency and consistency. Recently, with the adoption of SDN/NFV, network elements and architectures are being transformed, enhancing network manageability beyond existing network technologies and their OSS. In addition, technologies new to the CSP industry, such as big data, probing, and real-time analytics, are starting to be adopted and are expected to change the traditional operations environment of CSPs. In this presentation, the trend of OSS development is reviewed with reference to the activities of SDOs. The future direction of OSS and the operations environment will then be discussed.


What do we do with the complexity of disaggregation? Is a data-driven approach worth pursuing?

Kohei Shiomoto, NTT

NFV and SDN are key trends for the future Internet. Softwarizing network functions to run on commodity hardware has a number of benefits: cost reduction, elastic capacity, and quick deployment of new functions. The control plane is separated from the hardware data plane, enabling customized policy and routing logic to be implemented. On the other hand, these benefits do not come without sacrifice. A network device is disaggregated into components, which introduces complexity caused by interactions among those components. Moreover, each component is frequently and quickly replaced with a newer one as new features are developed and released.

Given the complexity introduced by these component interactions, we can no longer rely on the traditional approach, in which we build an analysis model of the entire system by piecing together accurate models of the mechanisms or workflows of individual components. Rather, we have to take a holistic approach, in which we grasp the system as a whole: we build an analysis model of the entire system from its input-output relationship. We measure and collect the inputs and outputs of the entire system, and by analyzing the relationship between them we build the model used to operate, administer, and maintain the system.

Machine learning is a promising approach for this purpose. Instead of relying on rule-based mechanisms, we build the input-output relationship from measurements. Such a data-driven approach has been developed successfully in NTT R&D: syslog analytics, trouble-ticket analytics, Twitter analytics, etc. Machine learning has been employed successfully for classification, clustering, regression, and factorization.

In this talk, we examine how these machine learning methods fit network management problems. We also discuss the breakdown of network management tasks into detection, analysis, and action, as well as the research frontier of autonomic network management driven by artificial intelligence, including machine learning and deep neural networks.



NFVI & Orchestration requirements for vCPE deployments

Azhar Sayeed, Redhat

vCPE is emerging as the top use case for NFV deployments. Service Providers have already experimented with trial vCPE deployments for business and residential customers. There is also the CORD (Central Office Re-architected as a Datacenter) use case for vCPE being discussed by AT&T and ON.Lab. For vCPE deployments, the NFVI layer must provide the right hooks/APIs to perform lifecycle management of VNFs/VMs, including provisioning, monitoring, and troubleshooting. This presentation provides an overall architecture for vCPE service deployment, discusses the requirements and challenges for the NFVI layer, especially with respect to performance requirements on servers and KVM, and provides a reference automation and orchestration framework for faster, easier deployment of vCPE.



Opensource - A compelling case for Service Providers

Rob Wilmoth, Redhat 

More than 100 open source initiatives are relevant to Service Providers today. Increasingly, SPs themselves are spawning new initiatives, be it AT&T ECOMP or TIP (Telecom Infra Project). While some open source initiatives are in their infancy, others are fairly mature. The questions are: why is open source important to Service Providers? How does it change the architecture of their current infrastructure (data center, network)? How mature is it, and who is deploying it? Where are they deploying it? What impact will projects like Open Compute have on the future of data centers? This presentation discusses these questions and provides answers and examples of why open source is the biggest revolution in this industry.



“Unikernels” Meet NFVs: A Step Towards Nano-Services

Wassim Haddad, Ericsson

In this talk, we describe a new standalone software platform for unifying automation, orchestration, and the "stitching" together of a designated set of virtualized network functions (NFVs). To achieve our goals, our proposed architecture leaps past existing virtualization technologies by introducing the concept of "nano-NFVs": highly specialized, immutable, secure, and scalable network functions with a much smaller memory footprint and stronger tenant isolation, using the "unikernel" as the main building block.

Current telco and IT market trends point towards two key intertwined technologies. To reduce complexity and boost performance, there is a strong desire to migrate from virtual machines towards micro-service enablers, namely containers (e.g., Docker). On the other hand, and always in the context of NFV, it is evident that without intelligent traffic steering between different VMs and/or containers, none of these virtualization techniques would be viable in a large-scale NFV deployment (e.g., virtual CPE, 4G/5G end-to-end slicing, IoT, etc.).

There are multiple advantages to adopting containers, as they offer higher density, a single operating system, and faster start-up/shutdown. However, ever-growing kernel complexity, coupled with the requirement to ensure strong isolation between different apps, has frequently been cited as a hurdle to wide adoption.

Our platform relies on unikernel virtualization technology as a key ingredient to offer "zero footprint" orchestration and traffic-steering capabilities to the designated set of nano-NFVs. Leveraging unikernel features enables operators and cloud providers to implement more granular, highly secure, and flexible scale-up/down of on-demand services (e.g., per user, per device, and/or per flow), resulting in better use of their datacenter infrastructure. In our talk, we discuss the architecture, challenges, performance, and ways forward.


Towards a simplified IP Network Configuration and Self Organization

Kenji Fujisawa, NICT

TBD


The Orchestration Stack, Microservices and NFV

Tom Nadeau, Brocade

At present, many open source efforts are under way focused on the problem of orchestration, at all levels of the stack. This presentation will first describe the stack of components needed, from bottom to top. We will then touch on the different open source efforts around these components as well as the overall orchestration space. Finally, we discuss and dissect the taxonomy of service components, whether they are virtualized monoliths, virtual network components/functions, or micro-services chained together. This includes discussion of how virtual machines and physical machines are overlaid onto the various orchestration options. Along the way, we will examine whether the ETSI NFV model fits the bill in this contemporary context.


Optical - TBD

Hiroaki Harai, NICT

TBD



Evolution of the vTDF for Dynamic Security

Scott Poretsky, Allot

NFV offers network operators a new way to design, deploy, and manage networking services. By decoupling network functions from proprietary hardware appliances, network operators can accelerate the roll-out of new services quickly and cost-effectively. NFV enables operators to reset the cost base of their network operations and create the flexible service-delivery environments they need to drive revenue and reduce costs. NFV also enables new kinds of complex services that were previously impossible to support. 3GPP has standardized the vTDF (virtual Traffic Detection Function) to provide application-based policy enforcement and charging. The vTDF makes real-time chaining decisions per flow at the granular subscriber level, with application awareness up to Layer 7, including HTTPS-encrypted traffic. The vTDF can be extended beyond the 3GPP standards to integrate multiple virtualized network functions (VNFs) that deliver security and analytics. The vTDF's service-chaining capability delivers a tight coupling of security and analytics VNFs for enhanced functionality and performance. The vTDF can offer network-based real-time security services such as Content Filtering (CF), Anti-Malware (AM), and Distributed Denial of Service (DDoS) protection with precise granularity and high data throughput. The dynamic scale of the vTDF and the precision of its DDoS mitigation inherently limit collateral damage to innocent bystanders. The result is a network that maintains performance under attack, without change to subscriber quality of experience (QoE) as services are consumed. By tightly coupling the analytics function, the operator can be alerted to attacks in real time and monitor subscriber QoE for the duration of the attack. The vTDF is able to scale automatically to the capacity, performance, and functions needed to support the dynamic demands of the network.


Automating Network Configuration Using YANG Data Models

Santiago Alvarez, Cisco

This session describes how model-driven APIs facilitate device automation. Network operations are being transformed by software automation and data analytics. APIs derived from YANG models allow the network programmer to focus on the device's underlying manageability data framework and abstract away the modeling language, protocols, transports, and encodings. This session will explain the power of model-driven APIs and how to get started with them using two open source projects.


Self Learning and Self Organizing Networks

TBD, TBD

TBD


NFVI vs VNF vs MANO validation -- Is there a method to this madness?

Rajesh Rajamani, Spirent

TBD



Utilizing I2RS to Control SDN and NFV Networks

Susan Hares, Huawei

The Interface to the Routing System (I2RS) creates a new IETF standard for a dynamic, high-bandwidth, programmatic interface to the routing system. This talk reviews the I2RS principles, the status of standardization, and early code development in IETF Hackathons and open source (OPNFV, ONOS, ODL).

Networks use the routing systems to: a) distribute topology and network metadata, b) calculate best paths or Traffic Engineering (TE) paths using network metadata, and c) communicate decisions about the forwarding planes. The routing processes that forward IP or optical traffic may be co-located with the routing system in SDN/NFV networks, or located in a centralized data center attached to the network. The I2RS protocol facilitates real-time or event-driven interaction with the routing system through interfaces designed for highly configurable bandwidth, data retrieval of topology and other state, and policy filters. The I2RS process allows I2RS Agents on a routing system to interact with I2RS clients running on management systems or application systems.

The I2RS protocol consists of several component protocol channels bonded together into a higher-layer protocol operating on ephemeral data models in the routing system. I2RS bonds extended versions of existing protocols together in order to build a highly reliable and programmatic interface. Ephemeral state in data models is configuration and operational state that does not survive a reboot. In contrast, configuration state in a routing system survives a reboot, having been stored in NVRAM or on hard disk, while session state created by OSPF, ISIS, or BGP peering disappears when the peer goes down. I2RS protocol channels include:

• Configuration of ephemeral data models for all facets of routing-system protocols, forwarding routes, policy, flow filters, and security filters, using extensions to the NETCONF/RESTCONF protocols and YANG data models,

• Publication and subscription service, for push and pull subscriptions of data (draft-ietf-netconf-push describes an extension to the NETCONF family to push publication of events, large data streams, traffic statistics, and more),

• Traceability (using extensions to existing syslog or other tracing formats),

• Meta-interfaces to protocol-independent ephemeral data (I2RS data models for the ephemeral RIB, topology information, and the filter-based RIB) in order to stitch together services,

• I2RS also supports the above interfaces applied to security network devices (I2NSF) in an NFV network, and provides extensions to existing transport protocols in order to support secure publication of data during DDoS or other network security incidents (DDoS Open Threat Signaling (DOTS) and Managed Incident Lightweight Exchange protocol extensions).

After covering the I2RS protocol, this presentation will review I2RS YANG models (protocol-independent, BGP, OSPF, flow filters) and links to existing SDN/NFV control work (CCAMP, ALTO, PCE, flow filters, and others). Lastly, this talk will provide information about open source for I2RS that operators can use to try out the I2RS protocol.



The Open Source Ecosystem around NFV/SDN and what is AT&T Open Source strategy?

Toby Ford, AT&T

This presentation will answer the following questions.
Why should operators consider using Open Source software and hardware?
What options are currently available to solve practical problems?
What is AT&T’s strategy on Open Source?
How is AT&T applying Open Source to its transformation?


Requirements for Building a Virtualized Central Office

Alan Sardella, Linux Foundation

The central office (CO) is ripe for virtualization, and there are many projects, both open source and commercial, that virtualize network functions, eliminating the need to install physical devices such as access or edge routers, optical line termination equipment, and broadband gateways. But what is the role of the controller in a virtualized central office, and what are the necessary orchestration components to make it all work together? This session will discuss everything from the NFV underlay to the SDN controller and the VNF managers/orchestrators needed to build a next-generation, virtualized CO.



SDN/SDTN for Containerised Applications in NFV edge computing

Hyde Sugiyama, Red Hat

In the near future, based on current industry trends, user traffic can be terminated at a Telco edge node, and real-time user applications (such as VoD and IoT integration gateways) can be delivered from many distributed Telco edge nodes rather than from a big central data center. User applications will be able to run in many containers hosted in a virtual machine or a physical machine, and containerized applications on top of the OpenShift container platform (the Kubernetes and Docker concept) can run on an OpenStack-based NFV platform and on other cloud infrastructure. The user can dynamically change the topology for the user's containerized applications in the virtualized infrastructure, which can build the fabric infrastructure for the containers. Service providers will need to manage the virtualization infrastructure for users' containerized applications. This session discusses the requirements on "SDN/SDTN" technology to support containerized-application-aware networking on top of NFV edge computing.



The Path Computation Element as a Centralized Controller in advanced SDN and NFV Operations

Adrian Farrel, Old Dog Consulting

It is relatively well understood that the Path Computation Element (PCE) is an essential component of Software Defined Network (SDN) systems. In particular, PCE has a role in determining paths across small domains managed by Network Controllers and in computing paths over larger networks controlled by Network Orchestrators.
Indeed, the PCE is essentially a broadly scoped tool that can apply algorithms to traffic demands and functional objectives to determine how best to utilize network resources. This scope is potentially so wide that PCE is considered a component of SDN control of all manner of networks (optical, MPLS, segment routing, etc.) as well as a solution to planning and delivering Service Function Chaining (SFC) and Network Function Virtualization (NFV).

However, the PCE Protocol (PCEP) was originally designed and specified solely to allow MPLS routers to request the PCE to compute paths for Label Switched Paths (LSPs). Over time, PCEP has been extended to make it capable of suggesting new paths for existing LSPs and even for causing new LSPs to be instantiated. But these functions on their own do not make PCEP a very useful protocol in SDN except for some specific cases with MPLS or GMPLS signalled networks.

This talk will discuss recent proposals to extend PCEP to enable its use in a range of different network technologies in networks operating with and without a control plane. This approach uses an architecture where PCE, instead of being a component of a controller or orchestrator, is the fundamental basis of the control or orchestration mechanisms. We call this "PCE-Based Central Control," and the architecture is common across a very wide range of networking applications.

The speaker will present this very simple concept and demonstrate how powerful it is by describing some of the more interesting use cases. He will also touch briefly on the protocol extensions that will be needed to realise this solution.



Big Data - Why Bigger Does Not Mean Better

Adrian Farrel, Old Dog Consulting

TBD


Introduction of SDN Technologies into Datacenter Network and Comparison

Hiroshige Tanaka, NTT Communications

TBD


Remote control experiments of an industrial robot using two distributed robot controllers

Takehiro Sato, Keio University

 

Cloud robotics [1] is expected to enable efficient and low-cost production operations in the manufacturing industry by exploiting computational and network resources in the cloud. By leveraging the Robot Operating System (ROS) framework [2], we pursue a geographically unconstrained distribution, over two distinct sites, of the key control functions of an industrial robot performing a surface-blending task. The concurrent use of Software Defined Transport Network (SDTN) and edge computing technologies in support of this goal was demonstrated at the 12th International Conference on IP + Optical Network (iPOP 2016) [3], a sister conference of SDN/MPLS. This presentation will show the advantage of distributing the robot's key control functions across two controllers located at distinct sites (one in Japan and one in the US). The robot's task is to perform precise blending of a metal surface. Fig. 1 shows the experimental system. The main robot controller is located in Japan. A robot sub-controller, along with a visualization software tool, a robot driver, and a robot arm, is located in the US. The two sites are connected by a VPN tunnel. The robot operator can control the robot through the visualization software tool. Three functions are required to perform the surface-blending task: surface detection, a process path planner, and a free motion planner. The surface detection function processes raw scanned data obtained from a 3D sensor attached to the robot arm, with the goal of detecting every surface available on the workbench. The visualization software tool presents the detected surfaces to the operator, who in turn selects the surface that needs to be worked on. After that, the process path planner computes the path that the blending tool has to follow in order to perform the required blending task. Finally, the free motion planner determines the movement of the robot arm that will carry out the blending procedure.
The optimal distribution of the three functions between the robot controllers depends on both the imposed constraints (e.g., the robot arm and operator must be in the US, the process path planner must be in Japan) and the volume of data that must be exchanged by these functions. Figs. 2(a) and (b) show the traffic data rate recorded at the main controller when the surface detection function is executed by the main controller or the sub-controller, respectively. The data volume exchanged between Japan and the US is drastically reduced in the latter case, as the raw data from the 3D sensor is processed by the sub-controller in the US and does not need to be sent overseas. In this presentation, we will review results obtained from a number of experimental scenarios with the objective of minimizing the completion time of the robot task.



Carrier Grade SDN in Real World Networks: Challenges and Opportunities

Nic Leymann, Thomas Beckhaus, Deutsche Telekom AG

With the rollout of the BNG architecture in Germany, Deutsche Telekom moves to a highly scalable network architecture, which is the basis for the migration towards an All-IP network and for all future services for residential and business customers. This architecture is based on distributed service nodes and centralized data centers implementing a wide variety of services. Carrier networks bring a lot of challenges when implementing SDN and NFV at carrier-grade scale (dynamics, scalability, etc.).

The presentation will cover the following topics:

Architecture Overview: Description of the BNG architecture, including provisioning model. Motivation for moving to a common platform for residential and business services.
Challenges/Requirements: Current challenges for service providers in a fast-changing environment and requirements for architecture evolutions to integrate SDN/NFV in carrier-scale networks (e.g., network stability and dynamics, service placement, flexibility, limitations of current provisioning models, ...). Impact of SDN on carrier networks and use cases.
Architecture Options: Description of architectural options for extending the BNG architecture to integrate SDN approaches and to facilitate data-center-based services. Implications of “programmable carrier networks” for network operations; comparison of suitable southbound protocols for carriers.


 

On the Dialectics of Intent

Diego Lopez, Telefonica

Intent-based networking is an extremely promising approach for the application interface of network infrastructures, especially tailored to be applicable to Software Networks, whether SDN, NFV, or any combination of them. Beyond this, intent belongs to a particular kind of policy expression that requires reconciliation with other policies applicable to network services, and that requires the application of mechanisms we can consider “dialectical”, able to produce a synthesis of the potentially conflicting policy expressions. In addition, it is necessary to find a common path among the different potential approaches, especially among the different flavors Software Networks can take in their combination of SDN control and NFV orchestration, and the application of recursive frameworks. The talk will highlight these issues and show how Machine Learning techniques can be used to address convergence, presenting some of the initial results of the COGNET project.



 

Policy Management

John Strassner, Huawei

The creation of a policy-based management system is complex, due to the variety and number of different actors using different concepts and languages to manage different types of devices. This paper describes a model-driven approach that provides an extensible and flexible representation of imperative, declarative, and other types of policies, along with the resources and services that they manage. A set of domain-specific languages use this model-driven representation to offer different programming abstractions that are optimized for different actors.


 

 

Modelling Services – An Operator's Perspective

Bin Wen, Comcast

The Software Defined Network (SDN) architecture provides a well-known mechanism for controlling network devices and operating networks. The Controller talks southbound to routers and switches and is managed by a Network Orchestrator that understands how the network resources can best be used. In many realizations of this architecture, these components communicate using data models expressed in YANG.

However, very little attention has been paid to how the Network Orchestrator understands what it needs to do with the network. What is the purpose for which it coordinates the network resources? What are the operator's objectives for the network? What services is the operator trying to deliver?

This presentation will look at network orchestration from an operator's perspective to show how the SDN model can be extended via a component called a Service Orchestrator as far north as the Service Provider's interface with end-user and enterprise customers. The Service Orchestrator consumes requests for services expressed in a standardized form using YANG, applies operator policies, and generates instructions to the Network Orchestrator. Customers can compare service offerings from different operators and can write software tools and applications to automatically request and modify the services that they are supplied.

Using first-hand experience developing a service model for Layer Two VPNs, the presenter will explain some of the issues that arise in getting network operators to agree on a standard data model, and will explain how the Service Orchestrator could significantly simplify and enhance the way that an operator runs their network.
The main focus is to capture a list of service attributes at external interconnect interfaces to the end customer or partner operator. These parameters are typically part of the carrier’s production specification. The data model here is intended to provide an abstract service topology without the actual configuration of all the network elements in the underlying physical topology.

The Service Orchestrator may have a southbound interface to instruct the Network Orchestrator to perform initial provisioning or MACD changes on the network elements, using the proper encapsulation and/or encryption technology of the operator's choice.
The presentation will also look at existing work on service models (such as the IETF's Layer Three VPN Service Model working group) and consider what other service models might be appropriate for standardization.


        


 

Policy Interface for the Service Layer: Abstraction and Automation (Northbound Interface from the Security Policy Controller to Express Security Policies)

Rakesh Kumar, Juniper Networks

TBD


 

Blowing the Network Stack Up: Challenges and solutions in disaggregation

Russ White, LinkedIn

Software Defined Networks, disaggregation, and white box seem to go hand in hand--but in reality they are three different things. This session will consider disaggregation drivers and how they overlap with and differ from the goals of SDN and white box. Challenges in the realm of disaggregation will be considered as well, along with potential market-based and open source solutions to those challenges.





 

 

Solving the Complexity Puzzle: Where SDNs have helped, and where they haven't

Russ White, LinkedIn

Software Defined Networking is supposed to make networks simpler to manage and operate by decoupling the control plane from the data plane, centralizing the control plane while leaving the data plane distributed. This presentation will argue that centralization and decentralization are two extremes of a continuum, both of which lead to unnecessary ties between the various components of a control plane. Instead, breaking the control plane into separate parts and placing each component where it makes the most sense--a midpoint between the centralize-everything and decentralize-everything approaches--will provide a realistic way forward for managing complexity.

 


 

Cloud-Native Environments and Networking/Policy

Chris Liljenstolpe, Tigera

There is a lot of talk about micro-services, cloud-native environments, containers, CI/CD, and the like. We will try to unpack them and review how this new application development and delivery environment actually works and how it differs from the current mode of operation. We’ll look at how it might impact things such as NFV going forward and take a closer look at how policy and ‘intent’ will drive this environment and the network that brings it all together.

 

 


 

The ‘impedance mismatch’ between cloud-native and security, and what to do about it

Chris Liljenstolpe, Tigera

TBD

 


 

The Transition to Big Data Network Analytics

Alex Henthorn-Iwane, Kentik

Network data is big data. However, legacy approaches to network analytics have been so reductive that most of the value of infrastructure data is lost. Thankfully, big data approaches are now readily accessible and the potential of network traffic and performance analytics in particular is finally being unlocked. This talk will look at the key big data requirements for network traffic & performance analytics, available technologies and limitations, on-premises versus cloud-based delivery models, key analytics capabilities that big data enables, sample use cases, and how analytics plays an essential role in automated remediation.


 

 

Invited Talk

TBD, Verizon

TBD


 

Control Architecture for Cloud Networks

Joel Halpern, Ericsson

SDN and Cloud solutions demand a wide range of operational capabilities. In addition, for many operators, these deployments need to operate well with other aspects of the environment. This talk presents some approaches to integrating the many levels of abstraction in SDN environments, including approaches to bringing access capabilities such as radio into the orchestrated and optimized environment.

 


 

SDN and NFV

Joel Halpern, Ericsson

This tutorial will bring together a number of the disparate elements of the SDN and NFV landscape. It will begin with a little bit of architectural framework. This will be followed by an overview of the activities of a number of industry and standards bodies and open source projects, and the relationships among them. This will include some of the work of ETSI NFV, as well as the ONF and IETF standardization activities and the OPNFV, ODL, and fd.io open source projects. The tutorial will conclude by showing some of the ways these various activities can be brought together to address operational needs.


 

The End of the Router?

Hannes Gredler, Rtbrick

The [Lindy effect](https://en.wikipedia.org/wiki/Lindy_effect) describes the phenomenon that the longer a technology has been around, the longer it can be expected to stay. For example, the automobile as a concept has existed for about 100 years, which is a good predictor that automobiles will be around for another 100 years, albeit in somewhat different designs (electric transmission, autonomous piloting). For a decade, SDN concepts have challenged the status quo of router technology but have so far failed to replace it completely. The IP/MPLS router is here to stay. This talk will highlight both the working and non-working parts of contemporary router designs and discuss suggestions for alternatives. The main design areas covered:

- Data plane: on/off-chip resources and questioning the need for large forwarding tables
- Tunneling technology and the cost of bits: commodity vs. boutique forwarding
- Control plane: microservices, database abstraction, and dynamic code insertion
- Premature optimization and its impact on technical debt
- Networking models vs. protocols
- Resiliency and dealing with failure

Author's Bio

Hannes Gredler is the CTO of RtBrick Inc., where he is working on third-generation routing software targeted at open networking hardware. In his previous role, Hannes spent 15 years with Juniper Networks working on the design and implementation of the BGP, OSPF, IS-IS, and MPLS routing and signaling protocols. Since 2013, Hannes has served as a co-chair of the IETF IS-IS working group. Hannes is the author of "IS-IS the complete reference" and holds 20+ patents in the space of IP/MPLS. Hannes received his M.S. in “System Engineering, Manufacturing and Automation” from the Technical University of Graz, Austria. He is a proud father of four and an active Aikido practitioner.


 

Tutorial 3: Virtual Networking for Data Centers with OpenStack and Containers

Jeff Lacouture, Ali Khayam, PLUMgrid

For operators exploring cloud-based solutions with OpenStack or containers such as Docker, Mesos, and Kubernetes, virtual networking is often a challenge between different cloud management models. This tutorial presents an overview of the virtual networking models, including Neutron, OVS, IO Visor, CNI, and CNM, and examines the control plane and data plane architecture. The talk will then focus on OpenStack deployment models with multi-tenancy, micro-segmentation, service insertion and chaining, virtual network functions, security policies, and monitoring.


 
 
 

© 2001-2016 ISOCORE CORP. ALL RIGHTS RESERVED