NFV Solutions for Telecom Operators


Page 1: NFV Solutions for Telecom Operators

Juniper Networks SDN and NFV products for Service Providers Networks

Evgeny Bugakov

Senior Systems Engineer, JNCIE-SP

21 April 2015

Moscow, Russia

Page 2: NFV Solutions for Telecom Operators

AGENDA

1. Virtualization strategy and goals

2. vMX product overview and performance

3. vMX use cases and deployment models

4. vMX roadmap and licensing

5. NorthStar WAN SDN Controller

Page 3: NFV Solutions for Telecom Operators

Virtualization strategy and goals

Page 4: NFV Solutions for Telecom Operators

MX Virtualization Strategy

[Diagram: network segments from the enterprise edge/mobile edge (branch office, HQ, carrier Ethernet switch, cell site router) through aggregation/metro/metro core (aggregation router, metro core, mobile & packet GWs and EPC) to the service provider edge/core (DC/CO edge router, service edge router, core) and the data center/central office. Virtualized roles across these segments: vCPE and enterprise router, virtual PE, hardware virtualization, virtual route reflector, MX SDN gateway, vPE. Control plane and OS: virtual JUNOS; forwarding plane: virtualized Trio.]

Leverage R&D effort and JUNOS feature velocity across all physical & virtualization initiatives

Page 5: NFV Solutions for Telecom Operators

Physical vs. Virtual

Physical                                        | Virtual
------------------------------------------------|------------------------------------------------
High throughput, high density                   | Flexibility to reach higher scale in control plane and service plane
Guarantee of SLA                                | Agile, quick to start
Low power consumption per throughput            | Low power consumption per control plane and service
Scale up                                        | Scale out
Higher entry cost ($) and longer time to deploy | Lower entry cost ($) and shorter time to deploy
Distributed or centralized model                | Optimal in a centralized, cloud-centric deployment
Well-developed network mgmt systems, OSS/BSS    | Same platform mgmt as physical, plus the same VM mgmt as any SW on a server in the cloud
Variety of network interfaces for flexibility   | Cloud-centric, Ethernet-only
Excellent price per throughput ratio            | Ability to apply a "pay as you grow" model

Each option has its own strengths; each is created with a different focus.

Page 6: NFV Solutions for Telecom Operators

Type of deployments with a virtual platform

• Traditional function, 1:1 form replacement
• New applications where physical is not feasible or ideal
• A whole new approach to a traditional concept
• Multi-function, multi-layer integration with routing as a plug-in

Examples: Cloud CPE, cloud-based VPN, service chaining GW, virtual private cloud GW, SDN GW, route reflector, services appliances, lab & POC, branch router, DC GW, CPE, PE, wireless LAN GW, mobile security GW, mobile GW

Page 7: NFV Solutions for Telecom Operators

vMX Product Overview

Page 8: NFV Solutions for Telecom Operators

vMX overview: efficient separation of control and data plane

– Data packets are switched within vTRIO

– Multi-threaded SMP implementation allows core elasticity

– Only control packets forwarded to JUNOS

– Feature parity with JUNOS (CLI, interface model, service configuration)

– NIC interfaces (eth0) are mapped to JUNOS interfaces (ge-0/0/0)

[Diagram: two VMs on x86 hardware over a hypervisor — the VFP (guest OS: Linux) running Virtual TRIO (vTRIO) with Intel DPDK, and the VCP (guest OS: JUNOS) running the control-plane daemons (chassisd, rpd, dcd, snmpd) and the LC kernel.]
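To illustrate the interface mapping and CLI parity noted above, here is a minimal sketch of how the first VFP NIC (eth0) would be configured on a vMX, exactly as ge-0/0/0 on a physical MX; the address is illustrative:

interfaces {
    ge-0/0/0 {                      # backed by the first VFP NIC (eth0 in the guest)
        unit 0 {
            family inet {
                address 192.0.2.1/30;   # example address
            }
        }
    }
}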

Page 9: NFV Solutions for Telecom Operators

Virtual and Physical MX

[Diagram: the control plane is common to both; the data plane runs Trio ucode on ASIC/hardware in the physical MX (PFE), while in the virtual MX (VFP) the Trio microcode is cross-compiled into x86 instructions.]

Cross-compilation creates high leverage of features between virtual and physical with minimal re-work.

Page 10: NFV Solutions for Telecom Operators

Virtualization techniques: deployment with hypervisors

[Diagram: guest VMs (each with an application and virtual NICs) on top of a hypervisor (KVM, Xen, VMware ESXi) and a physical layer with physical NICs; one variant uses VirtIO drivers with device emulation, the other PCI pass-through with SR-IOV.]

Para-virtualization (VirtIO, VMXNET3)
• Guest and hypervisor work together to make emulation efficient
• Offers flexibility for multi-tenancy, but with lower I/O performance
• The NIC resource is not tied to any one application and can be shared across multiple applications
• vMotion-like functionality is possible

PCI pass-through with SR-IOV
• Device drivers exist in user space
• Best for I/O performance, but has a dependency on the NIC type
• Direct I/O path between the NIC and the user-space application, bypassing the hypervisor
• vMotion-like functionality is not possible

Page 11: NFV Solutions for Telecom Operators

Virtualization techniques: deployment with containers

[Diagram: applications with virtual NICs running on a container engine (Docker, LXC) directly over the physical layer and physical NICs.]

Containers (Docker, LXC)
• No hypervisor layer; much less memory and compute resource overhead
• No need for PCI pass-through or special NIC emulation
• Offers high I/O performance
• Offers flexibility for multi-tenancy

Page 12: NFV Solutions for Telecom Operators

Virtual TRIO Packet Flow

[Diagram: packet flow through Virtual TRIO. Physical NICs feed the VFP's virtual NICs via DPDK into vTRIO (VMXT = microkernel) on vpfe0/vpfe1; the internal bridge br-int (172.16.0.3) connects vpfe eth0 (172.16.0.2) to the VCP (vre0/vre1 running rpd and chasd, em1: 172.16.0.1); the external bridge br-ext connects eth1 and the VCP's fxp0 (any address) for management.]

Page 13: NFV Solutions for Telecom Operators

vMX Performance

Page 14: NFV Solutions for Telecom Operators

vMX Environment

Sample system configuration

Description                 | Value
----------------------------|------------------------------------------------------------
Sample system configuration | Intel Xeon E5-2667 v2 @ 3.30 GHz, 25 MB cache. NIC: Intel 82599 (for SR-IOV only)
Memory                      | Minimum: 8 GB (2 GB for vRE, 4 GB for vPFE, 2 GB for host OS)
Storage                     | Local or NAS

Sample configuration for number of CPUs

Use case                                        | Requirement
------------------------------------------------|------------------------------------------------------------
vMX with up to 100 Mbps performance             | Min # of vCPUs: 4 (1 vCPU for VCP, 3 vCPUs for VFP). Min # of cores: 2 (1 core for VFP, 1 core for VCP). Min memory 8 GB. VirtIO NIC only.
vMX with up to 3 Gbps performance @ 512 bytes   | Min # of vCPUs: 4 (1 vCPU for VCP, 3 vCPUs for VFP). Min # of cores: 4 (2 cores for VFP, 1 core for host, 1 core for VCP). Min memory 8 GB. VirtIO or SR-IOV NIC.
vMX with 10 Gbps and beyond (min 2 x 10G ports) | Min # of vCPUs: 5 (1 vCPU for VCP, 4 vCPUs for VFP). Min # of cores: 5 (3 cores for VFP, 1 core for host, 1 core for VCP). Min memory 8 GB. SR-IOV NIC only.

Page 15: NFV Solutions for Telecom Operators

vMX Baseline Performance — vMX performance in Gbps

2 x 10G ports (# of cores for packet processing*)
Frame size (bytes) |    3 |    4 |    6 |    8 |   10
256                |  2   |  3.8 |  7.2 |  9.3 | 12.6
512                |  3.7 |  7.3 | 13.5 | 18.4 | 19.8
1500               | 10.7 | 20   | 20   | 20   | 20

4 x 10G ports (# of cores for packet processing*)
Frame size (bytes) |    3 |    4 |    6 |    8 |   10
256                |  2.1 |  4.2 |  6.8 |  9.6 | 13.3
512                |  4.0 |  7.9 | 13.8 | 18.6 | 26
1500               | 11.3 | 22.5 | 39.1 | 40   | 40

6 x 10G ports (# of cores for packet processing*)
Frame size (bytes) |    3 |    4 |    6 |    8 |   10
256                |  2.2 |  4.0 |  6.8 |  9.8 |
512                |  4.1 |  8.1 | 14   | 19.0 | 27.5
1500               | 11.5 | 22.9 | 40   | 53.2 | 60

8 x 10G ports
Frame size (bytes) | Throughput (Gbps)
66                 |  4.8
128                |  8.3
256                | 14.4
512                | 31
1500               | 78.5
IMIX               | 35.3

* The number of cores includes cores for packet processing and associated host functionality. For each 10G port there is a dedicated core not included in this number.

Page 16: NFV Solutions for Telecom Operators

vMX use cases and deployment models

Page 17: NFV Solutions for Telecom Operators

Service Provider VMX use case – virtual PE (vPE)

[Diagram: branch offices, SMB CPE and enterprise CPE reach the provider MPLS cloud through L2 PE and L3 PE nodes using pseudowire, L3VPN and IPsec/overlay technology; a DC/CO gateway and DC/CO fabric host the vPE, with peering and Internet at the edge.]

Market Requirement
• Scale-out deployment scenarios
• Low-bandwidth, high control-plane-scale customers
• Dedicated PE for new services and faster time-to-market

vMX Value Proposition
• vMX is a virtual extension of a physical MX PE
• Orchestration and management capabilities inherent to any virtualized application apply
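To make the vPE role concrete, here is a minimal sketch of the kind of Junos L3VPN routing instance a vPE would host; the instance name, AS numbers, interface and addresses are illustrative, not from the deck:

routing-instances {
    CUST-A {                              # example VRF for one vPE customer
        instance-type vrf;
        interface ge-0/0/1.100;           # customer-facing sub-interface (example)
        route-distinguisher 65000:100;
        vrf-target target:65000:100;
        protocols {
            bgp {
                group CE {
                    type external;
                    peer-as 65100;        # customer AS (example)
                    neighbor 10.10.10.2;  # CE address (example)
                }
            }
        }
    }
}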

Page 18: NFV Solutions for Telecom Operators

vMX as a DC Gateway — virtual USGW

[Diagram: inside the data center / central office, virtualized servers with VMs and VTEPs form virtual networks A and B; a ToR (IP), a ToR (L2) fronting a non-virtualized environment (L2) behind a VXLAN gateway (VTEP), and the vMX acting as VPN gateway (L3VPN) with VRF A / VRF B toward the MPLS cloud and VPN customers A and B.]

Market Requirement
• Service providers need a gateway router to connect the virtual networks to the physical network
• The gateway should be capable of supporting the different DC overlay, DC interconnect and L2 technologies in the DC, such as GRE, VXLAN, VPLS and EVPN

vMX Value Proposition
• vMX supports all the overlay, DCI and L2 technologies available on MX
• Scale-out control plane to scale up VRF instances and the number of VPN routes
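A minimal sketch of this gateway function on MX/vMX, assuming an EVPN-VXLAN virtual switch stitched into an L3VPN VRF through an IRB interface; instance names, VNI, route targets and addressing are illustrative, and the irb.100 unit would carry the tenant-facing IP address:

routing-instances {
    VS-NET-A {                               # virtual network A (EVPN-VXLAN)
        instance-type virtual-switch;
        vtep-source-interface lo0.0;
        route-distinguisher 65000:1;
        vrf-target target:65000:1;
        protocols {
            evpn {
                encapsulation vxlan;
                extended-vni-list 100;
            }
        }
        bridge-domains {
            BD-100 {
                vlan-id 100;
                routing-interface irb.100;   # L3 hand-off into the VRF below
                vxlan {
                    vni 100;
                }
            }
        }
    }
    VRF-CUST-A {                             # L3VPN toward the MPLS cloud
        instance-type vrf;
        interface irb.100;
        route-distinguisher 65000:101;
        vrf-target target:65000:101;
    }
}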

Page 19: NFV Solutions for Telecom Operators

Reflection from physical to virtual world — proof-of-concept lab validation or SW certification

• Perfect mirroring effect between a carrier-grade physical platform and the virtual router
• Can provide a reflection of an actual deployment in a virtual environment
• Ideal to support:
  • Proof-of-concept labs
  • New service configuration/operation preparation
  • SW release validation for an actual deployment
  • A training lab for the operational team
  • A troubleshooting environment for a real network issue
• CAPEX and OPEX reduction for the lab
• Quick turnaround when lab network scale is required

[Diagram: a physical deployment mirrored in a virtual environment.]

Page 20: NFV Solutions for Telecom Operators

Virtual BNG cluster in a data center

BNG cluster

10K~100K subscribers

Data Center or CO

vMX as vBNG

vMX vMX vMX vMX vMX

• Potentially BNG function can be virtualized, and vMX can help form a BNG cluster at the DC or CO (Roadmap item, not at FRS);• Suitable to perform heavy load BNG control-plane work while there is little BW needed; • Pay-as-you-grow model;• Rapid Deployment of new BNG router when needed;• Scale-out works well due to S-MPLS architecture, leverages Inter-Domain L2VPN, L3VPN, VPLS;

Page 21: NFV Solutions for Telecom Operators

vMX Route Reflector feature set

Route Reflectors are characterized by RIB scale (available memory) and BGP Performance (Policy Computation, route resolver, network I/O - determined by CPU speed)

Memory drives route reflector scaling

• Larger memory means that RRs can hold more RIB routes

• With higher memory an RR can control larger network segments – lower number of RRs required in a network

CPU speed drives faster BGP performance

• Faster CPU clock means faster convergence

• Faster RR CPUs allow larger network segments controlled by one RR - lower numbers of RRs required in a network

The vRR product addresses these pain points by running a Junos image as an RR application on faster CPUs and with more memory on standard servers/appliances.
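Functionally the vRR is ordinary Junos BGP route reflection on a larger control plane; a minimal sketch, with an illustrative cluster ID and neighbor addresses:

protocols {
    bgp {
        group RR-CLIENTS {
            type internal;
            local-address 10.255.0.1;   # vRR loopback (example)
            cluster 10.255.0.1;         # act as route reflector for this group
            neighbor 10.255.0.11;       # client peers (examples)
            neighbor 10.255.0.12;
        }
    }
}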

Page 22: NFV Solutions for Telecom Operators

VRR Scaling Results

Tested with a 32G vRR instance. (* Convergence numbers also improve with a higher-clock CPU.)

Address family | # of advertising peers | Active routes | Total routes      | Memory util. (all routes received) | Time to receive all routes | # of receiving peers | Time to advertise the routes (mem util.)
IPv4           | 600                    | 4.2 million   | 42 Mil (10 paths) | 60%                                | 11 min                     | 600                  | 20 min (62%)
IPv4           | 600                    | 2 million     | 20 Mil (10 paths) | 33%                                | 6 min                      | 600                  | 6 min (33%)
IPv6           | 600                    | 4 million     | 40 Mil (10 paths) | 68%                                | 26 min                     | 600                  | 26 min (68%)
VPNv4          | 600                    | 2 Mil         | 4 Mil (2 paths)   | 13%                                | 3 min                      | 600                  | 3 min (13%)
VPNv4          | 600                    | 4.2 Mil       | 8.4 Mil (2 paths) | 19%                                | 5 min                      | 600                  | 23 min (24%)
VPNv4          | 600                    | 6 Mil         | 12 Mil (2 paths)  | 24%                                | 8 min                      | 600                  | 36 min (32%)
VPNv6          | 600                    | 6 Mil         | 12 Mil (2 paths)  | 30%                                | 11 min                     | 600                  | 11 min (30%)
VPNv6          | 600                    | 4.2 Mil       | 8.4 Mil (2 paths) | 22%                                | 8 min                      | 600                  | 8 min (22%)

Page 23: NFV Solutions for Telecom Operators

Cloud-Based Virtual Route Reflector Design — solving the best-path selection problem for a cloud vRR

[Diagram: vRR 1 and vRR 2 hosted as "applications" in a data center reach regional networks 1 and 2 over a cloud backbone (cloud overlay with Contrail or VMware) via GRE and the IGP; clients 1-3 peer over iBGP, and the vRR selects the best path based on the view of the router it is logically attached to (R1 or R2).]

• vRR as an "application" hosted in the DC
• The GRE tunnel is originated from gre.X (a control-plane interface)
• The vRR behaves as if it were locally attached to R1 (requires resolution RIB configuration)
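A minimal sketch of the two knobs the slide calls out — a control-plane GRE tunnel and a resolution RIB so BGP next hops resolve as they would on a locally attached RR. Addresses and the choice of resolution RIBs are illustrative assumptions, not taken from the deck:

interfaces {
    gre {
        unit 0 {
            tunnel {
                source 172.16.1.10;              # vRR address in the DC (example)
                destination 192.0.2.1;           # R1 in the regional network (example)
            }
            family inet {
                address 10.200.0.1/30;           # tunnel address used for iBGP reachability
            }
        }
    }
}
routing-options {
    resolution {
        rib inet.0 {
            resolution-ribs [ inet.0 inet.3 ];   # resolve BGP next hops as a locally attached RR would
        }
    }
}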

Page 24: NFV Solutions for Telecom Operators

vMX to offer managed CPE / centralized CPE

[Diagram: branch offices with simple switches connect through L2 PEs and the provider MPLS cloud to a DC/CO GW and a DC/CO fabric with Contrail overlay, where vMX acts as vCPE (IPsec, NAT) and as vPE, service-chained with vSRX (firewall) under the Contrail controller, with Internet breakout via the PE.]

Market Requirement
• Service providers want to offer a managed CPE service and centralize the CPE functionality to avoid "truck rolls"
• Large enterprises want a centralized CPE offering to manage all their branch sites
• Both SPs and enterprises want the ability to offer new services without changing the CPE device

vMX Value Proposition
• vMX with service chaining can offer best-of-breed routing and L4-L7 functionality
• Service chaining offers the flexibility to add new services in a scale-out manner

Page 25: NFV Solutions for Telecom Operators

Cloud Based CPE with vMX

• A simplified CPE
• Remove CPE barriers to service innovation
• Lower complexity & cost

[Diagram: typical CPE functions (DHCP, firewall, routing/IP forwarding, NAT, modem/ONT, switch, access point, voice, MoCA/HPAV/HPNA3) are reduced to a simplified L2 CPE, while DHCP, firewall, routing/IP forwarding and NAT become in-network CPE functions on the BNG/PE in the SP network.]

In-network CPE functions:
• Leverage & integrate with other network services
• Centralize & consolidate
• Integrate seamlessly with mobile & cloud-based services
• Direct connect
• Extend reach & visibility into the home
• Per-device awareness & state
• Simplified user experience

Simplify the device required on the customer premises; centralize key CPE functions & integrate them into the network edge (BNG/PE in the SP network).

Page 26: NFV Solutions for Telecom Operators

More use cases? The limit is our imagination

• A virtual platform is one more tool for the network provider, and the use cases are up to users to define

VPC GW for private, public and hybrid cloud

Virtual Route Reflector

NFV plug-in for multi-function consolidation

SW certification, lab validation, network planning & troubleshooting, proof of concept

Distributed NFV Service Complex

Virtual BNG cluster

Virtual Mobile service control GW

And more…

Cloud based VPN

vGW for service chaining

Page 27: NFV Solutions for Telecom Operators

vMX FRS features

Page 28: NFV Solutions for Telecom Operators

vMX Products family

Product                  | Characteristics                                                                         | Target customer                                                                                            | Availability
Trial                    | Up to 90-day trial; no limit on capacity; inclusive of all features                     | Potential customers who want to try out vMX in their lab or qualify vMX                                    | Early availability by end of Feb 2015
Lab simulation/Education | No time limit enforced; forwarding plane limited to 50 Mbps; inclusive of all features  | Customer wants to simulate a production network in the lab; new customers gaining JUNOS and MX experience  | Early availability by end of Feb 2015
GA product               | Bandwidth-driven licenses; two modes for features: BASE or ADVANCE/PREMIUM              | Production deployment of vMX                                                                               | 14.1R6 (June 2015)

Page 29: NFV Solutions for Telecom Operators

VMX FRS product

• Official FRS for VMX Phase 1 is targeted for Q1 2015 with JUNOS release 14.1R6.
• High-level overview of the FRS product:
  • DPDK integration; min 80G throughput per VMX instance
  • OpenStack integration
  • 1:1 mapping between VFP and VCP
  • Hypervisor support: KVM, VMware ESXi, Xen
• High-level feature support at FRS (see the configuration sketch below):
  • Full IP capabilities
  • MPLS: LDP, RSVP
  • MPLS applications: L3VPN, L2VPN, L2Circuit
  • IP and MPLS multicast
  • Tunneling: GRE, LT
  • OAM: BFD
  • QoS: Intel DPDK QoS feature set
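As a sketch of what the FRS feature set looks like in practice, enabling the core IP/MPLS protocols on a vMX uses the same configuration as on a physical MX; the interface names are illustrative:

protocols {
    rsvp {
        interface ge-0/0/0.0;
    }
    mpls {
        interface ge-0/0/0.0;
    }
    ldp {
        interface ge-0/0/0.0;
    }
    ospf {
        traffic-engineering;
        area 0.0.0.0 {
            interface ge-0/0/0.0;
            interface lo0.0;
        }
    }
}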

Page 30: NFV Solutions for Telecom Operators

vMX Roadmap

Page 31: NFV Solutions for Telecom Operators

vMX with vRouter and Orchestration

[Diagram: a Contrail controller and an NFV orchestrator driving template-based configuration of vMX instances.]

• vMX with vRouter integration
• VirtIO utilized for para-virtualized drivers
• Contrail OpenStack used for VM management and for setting up the overlay network
• NFV orchestrator (OpenStack Heat templates) utilized to easily create and replicate vMX instances

Page 32: NFV Solutions for Telecom Operators

vMX Licensing

Page 33: NFV Solutions for Telecom Operators

vMX Pricing Philosophy

Value-based pricing
• Priced as a platform, not just on cost of bandwidth
• Each vMX instance is a router with its own control plane, data plane and administrative domain
• The value lies in the ability to instantiate routers easily

Elastic pricing model
• Bandwidth-based pricing
• Pay-as-you-grow model

Page 34: NFV Solutions for Telecom Operators

Application package functionality mapping

Application package | Functionality                                                                                                                                  | Use cases
BASE                | IP routing with 32K IP routes in FIB; basic L2 functionality (L2 bridging and switching); no VPN capabilities (no L2VPN, VPLS, EVPN or L3VPN)  | Low-end CPE or Layer 3 gateway
ADVANCED (-IR)      | Full IP FIB; full L2 capabilities including L2VPN, VPLS, L2Circuit; VXLAN; EVPN; IP multicast                                                  | L2 vPE; full IP vPE; virtual DC GW
PREMIUM (-R)        | BASE plus L3VPN for IP and multicast                                                                                                           | L3VPN vPE; virtual private cloud GW

Note: Application packages exclude IPSec, BNG and VRR functionality.

Page 35: NFV Solutions for Telecom Operators

Bandwidth License SKUs

• Bandwidth-based licenses are offered for each application package at the following processing-capacity limits: 100M, 250M, 500M, 1G, 5G, 10G and 40G. For 100M, 250M and 500M there is a combined SKU with all applications included.

         | BASE     | ADVANCE  | PREMIUM
100M     | (combined SKU, all applications included)
250M     | (combined SKU, all applications included)
500M     | (combined SKU, all applications included)
1G       | 1G BASE  | 1G ADV   | 1G PRM
5G       | 5G BASE  | 5G ADV   | 5G PRM
10G      | 10G BASE | 10G ADV  | 10G PRM
40G      | 40G BASE | 40G ADV  | 40G PRM

• Application tiers are additive, i.e. the ADV tier encompasses BASE functionality

Page 36: NFV Solutions for Telecom Operators

VMX software License SKUs

SKU Description

VMX-100M 100M perpetual license. Includes all features in full scale

VMX-250M 250M perpetual license. Includes all features in full scale

VMX-500M 500M perpetual license. Includes all features in full scale

VMX-BASE-1G 1G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features

VMX-BASE-5G 5G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features

VMX-BASE-10G 10G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features

VMX-BASE-40G 40G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features

VMX-ADV-1G 1G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances

VMX-ADV-5G 5G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances

VMX-ADV-10G 10G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances

VMX-ADV-40G 40G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances

VMX-PRM-1G 1G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features.

VMX-PRM-5G 5G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features.

VMX-PRM-10G 10G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features.

VMX-PRM-40G 40G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features.

Page 37: NFV Solutions for Telecom Operators

Juniper NorthStar Controller

Page 38: NFV Solutions for Telecom Operators

CHALLENGES WITH CURRENT NETWORKS — How to Make the Best Use of the Installed Infrastructure?

1. How do I use my network resources efficiently?
2. How can I make my network application-aware?
3. How do I get complete & real-time visibility?

Page 39: NFV Solutions for Telecom Operators

PCE ARCHITECTURE — A Standards-Based Approach for Carrier SDN

What is it?
A Path Computation Element (PCE) is a system component, application, or network node that is capable of determining and finding a suitable route for conveying data between a source and a destination.

What are the components?
• Path Computation Element (PCE): computes the path
• Path Computation Client (PCC): receives the path and applies it in the network; paths are still signaled with RSVP-TE
• PCE Protocol (PCEP): the protocol for PCE/PCC communication

[Diagram: one PCE connected to several PCCs via PCEP.]

Page 40: NFV Solutions for Telecom Operators

ACTIVE STATEFUL PCE — A Centralized Network Controller

The original PCE drafts (of the mid-2000s) were mainly focused on passive, stateless PCE architectures. More recently there is a need for a more 'active' and 'stateful' PCE. NorthStar is an active stateful PCE, which fits well with the SDN paradigm of a centralized network controller.

What makes an active stateful PCE different:
• The PCE is synchronized, in real time, with the network via standard networking protocols (IGP, PCEP)
• The PCE has visibility into the network state: bandwidth availability, LSP attributes
• The PCE can take control and create state within the MPLS network
• The PCE dictates the order of operations network-wide

[Diagram: NorthStar above the MPLS network — LSP state is reported up to the controller, and LSP state is created down in the network.]

Page 41: NFV Solutions for Telecom Operators

SOFTWARE-DRIVEN POLICY — NORTHSTAR COMPONENTS & WORKFLOW

[Diagram: analyze / optimize / virtualize workflow exposed through open APIs.]

• Topology discovery: TED discovery via IGP-TE and BGP-LS, LSDB discovery (OSPF, ISIS), and TE LSP discovery via PCEP
• Path computation: routing with application-specific algorithms
• State installation: PCEP Create/Modify TE LSP, one session per LER (PCC), with LSPs signaled in the network via RSVP

Page 42: NFV Solutions for Telecom Operators

NORTHSTAR MAJOR COMPONENTS

NorthStar consists of several major components:
• JUNOS Virtual Machine (VM)
• Path Computation Server (PCS)
• Topology Server
• REST Server

Component functional responsibilities:
• The JUNOS VM is used to collect the TE database & LSDB; a new JUNOS daemon, NTAD, is used to remotely 'flash' the lsdist0 table to the PCS
• The PCS has multiple functions: it peers with each PCC using PCEP for LSP state collection & modification, and it runs application-specific algorithms for computing LSP paths
• The REST server is the interface into the APIs

[Diagram: the PCE host (CentOS 6.5, KVM hypervisor) running the JUNOS VM (RPD, NTAD), PCS, Topology Server and REST server, with BGP-LS/IGP and PCEP sessions toward the PCCs in the MPLS network.]
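For reference, the BGP-LS feed toward the NorthStar JUNOS VM is ordinary BGP configuration on a router in the domain; a minimal sketch with illustrative addresses (a TED export policy, omitted here, is typically also required):

protocols {
    bgp {
        group northstar {
            type internal;
            local-address 10.255.0.1;      # router loopback (example)
            family traffic-engineering {
                unicast;                   # BGP-LS address family carrying link-state/TE information
            }
            neighbor 10.255.0.100;         # NorthStar JUNOS VM (example)
        }
    }
}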

Page 43: NFV Solutions for Telecom Operators

NORTHSTAR NORTHBOUND API — Integration with 3rd-Party Tools and Custom Applications

Standard, custom and 3rd-party applications sit on top of the REST interfaces:
• Topology discovery (IGP-TE / BGP-LS) is exposed through the topology API (REST)
• Path computation (PCEP, application-specific algorithms) is exposed through the path computation API (REST)
• Path installation (PCEP) is exposed through the path provisioning API (REST)

NorthStar pre-packaged applications: bandwidth calendaring, path diversity, premium path, auto-bandwidth / TE++, etc.

Page 44: NFV Solutions for Telecom Operators

NORTHSTAR 1.0 HIGH AVAILABILITY (HA) — Active/Standby for Delegated LSPs

NorthStar 1.0 supports a high-availability model only for delegated LSPs:
• Controllers are not actively synced with each other
• Active/standby PCE model with up to 16 backup controllers:
  • PCE group: all PCEs belonging to the same group
  • LSPs are delegated to the primary PCE
  • The primary PCE is the controller with the highest delegation priority
  • Other controllers cannot make changes to the LSPs
• If a PCC loses the connection with its primary PCE, it immediately uses the PCE with the next-highest delegation priority as its new primary PCE
• ALL PCCs MUST use the same primary PCE

[configuration protocols pcep]

pce-group pce {
    pce-type active stateful;
    lsp-provisioning;
    delegation-cleanup-timeout 600;
}
pce jnc1 {
    pce-group pce;
    delegation-priority 100;
}
pce jnc2 {
    pce-group pce;
    delegation-priority 50;
}

[Diagram: a PCC with PCEP sessions to two controllers, jnc1 and jnc2.]

Page 45: NFV Solutions for Telecom Operators

JUNOS PCE CLIENT IMPLEMENTATION — New JUNOS daemon, pccd

• Enables a PCE application to set parameters for traditionally configured TE LSPs and to create ephemeral LSPs
• PCCD is the relay/message translator between the PCE & RPD
• LSP parameters, such as the path & bandwidth, and LSP creation instructions received from the PCE are communicated to RPD via PCCD
• RPD then signals the LSP using RSVP-TE

[Diagram: PCE — PCEP — PCCD — JUNOS IPC — RPD, which signals LSPs into the MPLS network with RSVP-TE.]
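For completeness, a minimal sketch of the PCC-side configuration that brings up the PCEP session and hands LSP control to the external PCE; the controller address is illustrative, and the stateful/provisioning knobs mirror the pce-group example on the HA slide:

protocols {
    pcep {
        pce northstar {
            destination-ipv4-address 10.255.0.100;   # NorthStar PCE (example)
            destination-port 4189;                   # standard PCEP port
            pce-type active stateful;
            lsp-provisioning;                        # allow PCE-created (ephemeral) LSPs
        }
    }
    mpls {
        lsp-external-controller pccd;                # delegate LSP control via pccd
    }
}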

Page 46: NFV Solutions for Telecom Operators

NORTHSTAR SIMULATION MODE — NorthStar vs. IP/MPLSview

NorthStar — real-time network functions (topology discovery, LSP control/modification):
• Dynamic topology updates via BGP-LS / IGP-TE
• Dynamic LSP state updates via PCEP
• Real-time modification of LSP attributes via PCEP (ERO, B/W, pre-emption, ...)

NorthStar Simulation — MPLS LSP planning & design (MPLS capacity planning):
• Topology acquisition via the NorthStar REST API (snapshot)
• LSP provisioning via the REST API
• Exhaustive failure analysis & capacity planning for MPLS LSPs
• MPLS LSP design (P2MP, FRR, JUNOS config'let, ...)

IP/MPLSview — 'full' offline network planning & management (FCAPS: PM, CM, FM):
• Topology acquisition & equipment discovery via CLI, SNMP, NorthStar REST API
• Exhaustive failure analysis & capacity planning (IP & MPLS)
• Inventory, provisioning, & performance management

Page 47: NFV Solutions for Telecom Operators

DIVERSE PATH COMPUTATION — Automated Computation of End-to-End Diverse Paths

Network-wide visibility allows NorthStar to support end-to-end LSP path diversity:
• Wholly disjoint path computations, with options for link, node and SRLG diversity
• A pair of diverse LSPs with the same end-points or with different end-points
• SRLG information learned dynamically from the IGP
• Supported for PCE-created LSPs (at time of provisioning) and delegated LSPs (through manual creation of a diversity group)

[Diagram: primary and secondary links between CE pairs; the shared risk is flagged ("Warning!") and eliminated by NorthStar.]

Page 48: NFV Solutions for Telecom Operators

PCE-CREATED SYMMETRIC LSPS — Local Association of the LSP Symmetry Constraint

NorthStar supports creating symmetric LSPs:
• Does not leverage GMPLS extensions for co-routed or associated bidirectional LSPs
• Unidirectional LSPs (with identical names) are created from node A to node Z and from node Z to node A
• The symmetry constraint is maintained locally on NorthStar (attribute: pair=<value>)

[Diagram: symmetric LSP creation by NorthStar.]

Page 49: NFV Solutions for Telecom Operators

MAINTENANCE-MODE RE-ROUTING — Automated Path Re-computation, Re-signaling and Restoration

Automate the re-routing of traffic before a scheduled maintenance window:
• Simplifies planning and preparation before and during a maintenance window
• Eliminates the risk that traffic is mistakenly affected when a node or link goes into maintenance mode
• Reduces the need for spare capacity through optimum use of the resources available during the maintenance window
• After the maintenance window finishes, paths are automatically restored to the (new) optimum path

Workflow:
1. Maintenance mode tagged: LSP paths are re-computed assuming the affected resources are not available
2. In maintenance mode: LSP paths are automatically re-signaled (make-before-break)
3. Maintenance mode removed: all LSP paths are restored to their (new) optimal path

Page 50: NFV Solutions for Telecom Operators

GLOBAL CONCURRENT OPTIMIZATION — Optimized LSP Placement

NorthStar enhances traffic engineering through LSP placement based on network-wide visibility of the topology and LSP parameters:
• CSPF ordering can be user-defined, i.e. the operator can select which parameters, such as LSP priority and LSP bandwidth, influence the order of placement

Net Groom:
• Triggered on demand
• The user can choose the LSPs to be optimized
• LSP priority is not taken into account
• No pre-emption

Path Optimization:
• Triggered on demand or at scheduled intervals (with the optimization timer)
• Global re-optimization across all LSPs
• LSP priority is taken into account
• Pre-emption may happen

[Diagram: a new path request hits a bandwidth bottleneck and CSPF failure; global re-optimization re-places the high- and low-priority LSPs.]

Page 51: NFV Solutions for Telecom Operators

INTER-DOMAIN TRAFFIC ENGINEERING — Optimal Path Computation & LSP Placement

• LSP delegation, creation and optimization for inter-domain LSPs
• Single active PCE across domains, with BGP-LS for topology acquisition
• JUNOS inter-AS requirements & constraints:
  http://www.juniper.net/techpubs/en_US/junos13.3/topics/usage-guidelines/mpls-enabling-inter-as-traffic-engineering-for-lsps.html

[Diagram: inter-AS traffic engineering between AS 100 and AS 200, and inter-area traffic engineering across Area 0 and Areas 1-3, each under a single NorthStar controller.]

Page 52: NFV Solutions for Telecom Operators

NORTHSTAR SIMULATION MODE — Offline Network Planning & Modeling

NorthStar builds a near real-time network model for visualization and offline planning through dynamic topology/LSP acquisition:
• Export of topology and LSP state to NorthStar simulation mode for 'offline' MPLS network modeling
• Add/delete links, nodes and LSPs for future network planning
• Exhaustive failure analysis, P2MP LSP design/planning, LSP design/planning, FRR design/planning
• JUNOS LSP config'let generation

[Diagram: NorthStar-Simulation growth projections for year 1, year 3, year 5 and a year-1 extension.]

Page 53: NFV Solutions for Telecom Operators

A REAL CUSTOMER EXAMPLE — PCE VALUE: Centralized vs. Distributed Path Computation

[Chart: link utilization (%) per link, distributed CSPF vs. PCE centralized CSPF, across roughly 170 links.]

Distributed CSPF assumptions:
• TE-LSP operational routes are used for distributed CSPF
• RSVP-TE maximum reservable bandwidth set to 92%
• Modeling was performed with the exact operational LSP paths
• Only primary EROs & online bypass LSPs

Centralized path calculation assumptions:
• All TE-LSPs converted to EROs via the PCE design action
• Objective function is min-max link utilization
• Modeling was performed with 100% of TE LSPs being computed by the PCE

Result: up to 15% reduction in RSVP reserved bandwidth.

Page 54: NFV Solutions for Telecom Operators

NORTHSTAR 1.0 FRS DELIVERY

NorthStar FRS is targeted for March 23rd:
• (Beta) trials / evaluations already ongoing
• First customer wins in place

Target JUNOS releases:
• 14.2R3 Special*
• 14.2R4* / 15.1R1* / 15.2R1*
  * Pending TRD process

Supported platforms at FRS:
• PTX (3K, 5K)
• MX (80, 104, 240/480/960, 2010/2020, vMX)
• Additional platform support in NorthStar 2.0

NorthStar packaging & platform:
• Bare-metal application only; no VM support at FRS
• Runs on any x86 64-bit machine supported by Red Hat 6 or CentOS 6
• Single hybrid ISO for installation; based on Juniper SCL 6.5R3.0

Recommended minimum hardware requirements:
• 64-bit dual x86 processor or dual 1.8 GHz Intel Xeon E5 family equivalent
• 32 GB RAM
• 1 TB storage
• 2 x 1G/10G network interfaces

Page 55: NFV Solutions for Telecom Operators

Questions?

Page 56: NFV Solutions for Telecom Operators

How to get more?

• Join us on our Facebook page: Juniper.CIS.SE (Juniper techpubs ru)

Page 57: NFV Solutions for Telecom Operators

Thank You!