
Page 1: Optical Trends in the Data Center

Optical Trends in the Data Center

Doug Coleman, Manager, Technology & Standards

Distinguished Associate, Corning Cable Systems

Page 2: Optical Trends in the Data Center

Data Center Environment

• Higher speeds
• Higher density
• Higher reliability
• Lower capex
• Lower opex
• Green

Page 3: Optical Trends in the Data Center

VCSELs Drive the MMF Value Proposition

• 850 nm VCSEL
  – Highly efficient manufacturing process (>100,000 per wafer)
  – Ease of packaging into transceiver TOSA
• Lowest transceiver price ($SM/$MM)
  – 10G, serial x serial: 2:1
  – 40G, serial x parallel: 5:1
  – 100G, serial x parallel: 30:1

Vertical Cavity Surface Emitting Laser

Page 4: Optical Trends in the Data Center

Data Center Environment

Source: Corning Cable Systems

Page 5: Optical Trends in the Data Center

Data Center Multimode Cable Channel Distribution

Trunk Length, Product Manufactured 2009–2011

50% 1st Trunk, 40% 2nd Trunk, 10% 3rd Trunk

[Histogram: trunk-length frequency and cumulative percentage vs. length (m). Average = 54.2 m; 88% of trunks are ≤100 m.]

Source: Corning Cable Systems

Page 6: Optical Trends in the Data Center

Standard Specified Distances

850 nm Ethernet Distance (m)

        1G      10G          40G     100G
OM3     1100    300          100     100
OM4     1100    400 / 550*   150     150

850 nm Fibre Channel Distance (m)

        4G      8G      16G
OM3     380     150     100
OM4     480     190     125

*Engineered length

Page 7: Optical Trends in the Data Center

Migration to OM4

Finisar 32G presentation slide at the T11.2 Fibre Channel meeting (06/2010)

TIA-942-A recommends OM4.

Fibre Channel (32G) and Ethernet (100G) used OM4 to define distance objectives.

Transmission and cable standards recommend OM4.

Page 8: Optical Trends in the Data Center

Data Center Trends: Electronics and Connectivity

Optical Electronics and Connectivity Focus

Low Cost Low Power High Density

Increasing data rates, transceiver size reduction, server consolidation/virtualization, multi-core processors, line-card density, embedded optics, cloud computing, and increased interconnect density

Page 9: Optical Trends in the Data Center

Evolution of VSR Computing Optical Interconnects

Source: Finisar 2011

Page 10: Optical Trends in the Data Center

Market Trends Toward 100GE

• Embedded optics adoption in the data center
  – Reducing signal trace length at higher data rates requires close proximity of optics to the ASIC
  – Enables 5x to 10x denser I/O than edge-mounted pluggable optics

Enabling Dense I/O for Data Centers

Source: Avago

Page 11: Optical Trends in the Data Center

MMF Transceivers Trend

• 850 nm MM VCSEL – pluggable transceivers
  • Continue dominating DC demand from 10GE to 40GE and on to 100GE due to power, density, and cost
• 850 nm MM VCSEL – embedded optics
  • Gaining adoption in DC and HPC, especially when moving to 25G per lane

Page 12: Optical Trends in the Data Center

1310 nm Silicon Photonics

• Optical interconnect boasts peak transfer rates of 1.6 Tb/s

• 32- to 64-fiber connector

• 1310 nm MMF reach of 300 m at 25 Gb/s

• High-BW SiP links server CPUs to storage units within the rack

• Allows servers to be replaced easily and independently

Source: Intel

Page 13: Optical Trends in the Data Center

The Need for Speed: 10/40/100G

Page 14: Optical Trends in the Data Center

Server Virtualization Drives Higher Data Rates

• Multiple applications running in parallel on one server (e.g., 20 to 100 apps/server)
• 10 to 50 servers consolidated
• Increases utilization efficiency to as much as 90%
• Multi-core processors (4, 8, 16 … 50)
• PCIe2: 8 lanes @ 5 GT/s, 8b/10b encoding
• PCIe3: 16 lanes @ 8 GT/s, 128b/130b encoding
• Increased memory
• Less connectivity and fewer electronic ports
  – Drives utilization of high-BW optical connectivity to mitigate system bottlenecks and support the required increase in I/O speeds (see the sketch below)
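To see why consolidation pushes I/O toward optics, it helps to put numbers on the PCIe bullets above. A minimal sketch, assuming only the lane counts, signaling rates, and encodings listed on the slide (the helper function itself is illustrative):

```python
def pcie_throughput_gbps(lanes, gt_per_s, payload_bits, total_bits):
    """Usable bandwidth in Gb/s: lanes x signaling rate x encoding efficiency."""
    return lanes * gt_per_s * payload_bits / total_bits

# PCIe2: 8 lanes @ 5 GT/s, 8b/10b encoding
print(pcie_throughput_gbps(8, 5.0, 8, 10))      # 32.0 Gb/s (~4 GB/s)

# PCIe3: 16 lanes @ 8 GT/s, 128b/130b encoding
print(pcie_throughput_gbps(16, 8.0, 128, 130))  # ~126 Gb/s (~15.8 GB/s)
```

A consolidated server able to source over 100 Gb/s of PCIe traffic quickly saturates 1G or even 10G network ports, which is the bottleneck the slide points to.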

Page 15: Optical Trends in the Data Center

Server Virtualization Drives Higher Data Rates

Source: Dell’Oro, 07/2012

Page 16: Optical Trends in the Data Center

Data Centers Are Going through Data Rate Increases

TODAY

• Core switch
  • 10 GbE non-blocking
  • Metro/campus 10G connectivity (LR/ER)
  • WAN connectivity through 40G DWDM
• Servers to TOR switch
  • 40 x 1 GbE
  • Copper Cat 5, 100 m
• TOR switch
  • 40G rack capacity
  • 2-4 x 10 GbE uplinks
  • SFP+, MMF or SMF

TOMORROW

• Core switch
  • 100 GbE non-blocking
  • Metro/campus 100G connectivity (LR/ER)
  • WAN connectivity through 100G DWDM
• Servers to TOR switch
  • 40 x 10 GbE, copper or fiber
• TOR switch
  • 400G rack capacity
  • 2-4 x 40/100 GbE uplinks
  • QSFP/CFP/CFP2 (MMF/SMF)

[Diagram: server-to-TOR links at 1G today vs. 10G tomorrow; TOR uplinks at Nx10G today vs. Nx40/100G tomorrow.]

Page 17: Optical Trends in the Data Center

10G/40G/100G Ethernet Switch Line Card Density

Source: 100GbE Electrical Backplane/Cu Cable CFI IEEE 802 Plenary, Dallas, TX, Nov 2010

[Line card fiber counts: 96 fibers OM3/OM4; 528 fibers OM3/OM4; 8 fibers SMF (CWDM); 768 fibers OM3/OM4]

Page 18: Optical Trends in the Data Center

Legacy Data Center Collapsed Architecture: EDGE, Aggregation and Core Switch Consolidation into MDA

Page 19: Optical Trends in the Data Center

Data Center Flat Architecture: Top of Rack EDGE Switch

• 4 QSFP+ 40G ports
• 48 SFP+ ports – 10G or 1G
• 48 server connections

Source: Blade Networks Technologies

Source: FCI

Page 20: Optical Trends in the Data Center

Top of Rack EDGE Switch/Server Interconnect: 10GBASE-T

• Significant switch power requirements (compare per-switch totals in the sketch below)
  • 10G copper: 4 to 5 watts per port (40 nm silicon)
  • Need <1 W at 10 m (LOM)
  • Major silicon chip (28 nm) development required to reduce power (2014-2015)
    – Yield and cost issues
• 10G optical switches: 1 to 4 watts per port
  – Typical SFP+: 0.5 watts
• 10G copper latency: 2 µs/PHY

10GBASE‐T Card

Source: Fulcrum Microsystems
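At rack scale the per-port figures above compound quickly. A rough comparison, assuming a hypothetical 48-port ToR switch and the per-port wattages quoted on this slide:

```python
PORTS = 48                 # assumed ToR port count (matches the 48-port switch shown earlier)
COPPER_W_PER_PORT = 4.5    # midpoint of the 4-5 W quoted for 10GBASE-T (40 nm)
SFP_PLUS_W_PER_PORT = 0.5  # typical SFP+ figure quoted above

print(f"10GBASE-T: {PORTS * COPPER_W_PER_PORT:.0f} W per switch")   # 216 W
print(f"SFP+:      {PORTS * SFP_PLUS_W_PER_PORT:.0f} W per switch") #  24 W
```

Roughly an order of magnitude in port power, before counting the cooling load that the extra dissipation adds.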

Page 21: Optical Trends in the Data Center

Top of Rack EDGE Switch/Server Interconnect: 10G SFP+ Active Optical Cables

• Direct-attached SFP+ transceiver
  – Distribution-to-edge interconnect
  – Edge-to-server interconnect
• Performance attributes:
  – Low cost
    • Optical components selected for the designed distance
    • Reduced manufacturing and testing costs
  – Small diameter, low weight, and flexible
  – Distance capability (≥10 m)
  – No connector cleaning concerns
  – No connector insertion loss concerns
  – No transceiver mismatch concerns

Source: Cisco

Page 22: Optical Trends in the Data Center

Data Center Flat Architecture: Top of Rack EDGE Switch

• 16 QSFP+ 40G ports
• Each QSFP+ port can handle four 1/10G server connections
• 64 x 1/10G server connections

Source: Cisco

MPO-to-LC Harness (Source: INF-8438i)

Twinax Harness

Page 23: Optical Trends in the Data Center

Data Center Flat Architecture

Today’s Data Center

ToR or EoR Switches

Source: Cisco Data Center Infrastructure 2.5 Design Guide, Cisco Validated Design I, December 6, 2007.

Tomorrow’s Data Center

Source: Data Center Basics and the Role of Optical Fiber, Discerning Analytics, January 30, 2012.

Trends toward a collapsed architecture that accommodates more east/west vs. north/south traffic. Low latency and low oversubscription are required in new systems.

[Legend, data center prioritized list: * 1st priority; ** 2nd priority (close); *** 3rd priority (distant). Servers are marked ***.]

Page 24: Optical Trends in the Data Center

Data Center Trend: Flat Architecture

Source: Cisco

Page 25: Optical Trends in the Data Center

Data Center Trend: Network Monitoring

• Network-layer data must first be extracted in order to apply analysis tools
  – SPAN (mirroring) ports (active)
  – Port tap (passive for optical)
• What is monitoring looking for?
  – Security threats
  – Performance issues
  – Optimization (I/O bottlenecks)

Page 26: Optical Trends in the Data Center

Data Center Trend: Network Monitoring

[Diagram: WAN/ISP into core switch, then distribution and access switches to server devices across the LAN/IP network; SAN director switch connecting server and storage devices on the SAN.]

Ethernet monitoring is growing from core port monitoring all the way down into the access layer.

Fibre Channel monitoring is typically between the switch and the storage array.

Page 27: Optical Trends in the Data Center

Data Center Trend: Optical Tap

• Network Security and Performance

• OPEX savings through improved operations

• Better tracking of and adherence to SLAs enables the move to cloud infrastructures

Page 28: Optical Trends in the Data Center

Configuration Options

Configuration A: Non-Integrated LC/LC

Configuration B: Integrated MPO/LC

Configuration C: Integrated MPO/MPO

Page 29: Optical Trends in the Data Center

Data Center Trend: Optical Tap Example

Monitor Device

Data Rate          Link       Insertion Loss (coupler)   Fiber Type   Link Distance
8G Fibre Channel   70% LIVE   2.2 dB (std)               OM4/OM3      5/‐ m
8G Fibre Channel   70% LIVE   1.8 dB (Corning)           OM4/OM3      85/75 m

[Diagram: Device 1 and Device 2 linked through a passive tap feeding the monitor device.]
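The link distances in the table fall out of a loss-budget subtraction: the tap coupler's insertion loss comes off the application's total budget, and whatever remains has to cover fiber attenuation and connectors. A minimal sketch of that reasoning; the budget, connector loss, and 3.5 dB/km attenuation below are illustrative assumptions, not Corning's exact model:

```python
def max_reach_m(loss_budget_db, tap_loss_db, connector_loss_db, fiber_db_per_km=3.5):
    """Fiber reach left after tap and connector losses are subtracted from the budget."""
    remaining_db = loss_budget_db - tap_loss_db - connector_loss_db
    return max(remaining_db, 0.0) / fiber_db_per_km * 1000.0

# Assumed 8G Fibre Channel tapped-link budget of 2.4 dB and 0.3 dB of connector loss.
print(max_reach_m(2.4, 1.8, 0.3))  # ~86 m with the 1.8 dB Corning coupler
print(max_reach_m(2.4, 2.2, 0.3))  # ~0 m with the 2.2 dB standard coupler
```

The 0.4 dB saved by the lower-loss coupler is the difference between a usable 85 m link and almost no reach at all.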

Page 30: Optical Trends in the Data Center

Traditional Data Center Fabrics

• Ethernet
  – LAN (1/10/40/100G)
  – OM3/OM4 fiber
  – Non-deterministic
• Fibre Channel
  – SAN (4/8/16G)
  – OM3/OM4 fiber
  – Deterministic

Source: Info‐Advantage

Page 31: Optical Trends in the Data Center

Fibre Channel speedMAP

• “FC” used throughout all applications for Fibre Channel infrastructure and devices, including edge and ISL interconnects. Each speed maintains backward compatibility at least two previous generations (I.e., 8GFC backward compatible to 4GFC and 2GFC)

• Line Rate: All “…GFC” speeds listed above are single-lane serial stream I/O’s. All “…GFCp” speeds listed above are multi-lane I/Os‡ Dates: Future dates estimated

Product    Throughput   Line Rate   T11 Spec Technically   Market
Naming     (MBps)       (GBaud)     Completed (Year)‡      Availability (Year)‡

1GFC       200          1.0625      1996                   1997
2GFC       400          2.125       2000                   2001
4GFC       800          4.25        2003                   2005
8GFC       1600         8.5         2006                   2008
16GFC      3200         14.025      2009                   2011
32GFC      6400         28.05       2013                   2015
64GFC      12800        TBD         2016                   Market demand
128GFC     25600        TBD         2019                   Market demand
256GFC     51200        TBD         2022                   Market demand
512GFC     102400       TBD         2025                   Market demand
128GFCp    25600        4x28.05     2014                   2015
1TFC       204800       TBD         2028                   Market demand
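As a worked check on the table (not part of the original slide), the line rates all derive from the 1.0625 GBaud 1GFC base rate: speeds through 8GFC scale it directly under 8b/10b encoding, while 16GFC and 32GFC carry the same scaled payload re-encoded at 64b/66b:

```python
BASE_GBAUD = 1.0625  # 1GFC line rate

def fc_line_rate(speed_multiple, encoding):
    """Serial line rate in GBaud for an N x 1GFC payload."""
    raw = BASE_GBAUD * speed_multiple
    if encoding == "8b/10b":
        return raw  # 8b/10b overhead is already built into the base rate
    # 64b/66b: strip the 10/8 overhead from the payload, re-wrap at 66/64
    return raw * (8 / 10) * (66 / 64)

print(fc_line_rate(8, "8b/10b"))    # 8.5    -> 8GFC
print(fc_line_rate(16, "64b/66b"))  # 14.025 -> 16GFC
print(fc_line_rate(32, "64b/66b"))  # 28.05  -> 32GFC
```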

Page 32: Optical Trends in the Data Center

Fibre Channel 4/8/16G Variants: OM2, OM3, OM4

Multimode Cable Plant for OM2 Limiting Variants

FC-0                  400-M5-SN-I    800-M5-SN-S    1600-M5-SN-S
Data Rate (MB/s)      400            800            1600
Operating Range (m)   0.5-150        0.5-50         0.5-35
Loss Budget (dB)      2.06           1.68           1.63

Multimode Cable Plant for OM3 Limiting Variants

FC-0                  400-M5E-SN-I   800-M5E-SN-I   1600-M5E-SN-I
Data Rate (MB/s)      400            800            1600
Operating Range (m)   0.5-380        0.5-150        0.5-100
Loss Budget (dB)      2.88           2.04           1.86

Multimode Cable Plant for OM4 Limiting Variants

FC-0                  400-M5F-SN-I   800-M5F-SN-I   1600-M5F-SN-I
Data Rate (MB/s)      400            800            1600
Operating Range (m)   0.5-400        0.5-190        0.5-125
Loss Budget (dB)      2.95           2.19           1.95
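Each variant's loss budget caps the sum of fiber attenuation and connection losses over the channel. A small sketch of that check, taking the budgets and ranges from the tables above but assuming illustrative values of 3.5 dB/km multimode attenuation and 0.5 dB per mated connection:

```python
def channel_fits(length_m, budget_db, connections,
                 loss_per_connection_db=0.5, fiber_db_per_km=3.5):
    """True if fiber plus connection losses stay within the variant's loss budget."""
    total_db = length_m / 1000.0 * fiber_db_per_km + connections * loss_per_connection_db
    return total_db <= budget_db

# 16GFC on OM4 (1600-M5F-SN-I): 1.95 dB budget, rated to 125 m
print(channel_fits(125, 1.95, connections=2))  # True  (0.44 + 1.0 = 1.44 dB)
print(channel_fits(125, 1.95, connections=4))  # False (0.44 + 2.0 = 2.44 dB)
```

In practice the higher-speed variants are also bandwidth-limited, so staying inside the loss budget is necessary but not sufficient.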

Page 33: Optical Trends in the Data Center

Fibre Channel FC-PI6 32G

• Fibre Channel, March 2010
  – 32G activity started
  – Expected completion: 2013
  – Commercial products: 2014
• Approved objectives
  – OM3/OM4: 70 m to 100 m
  – SMF: 10 km
  – SFP+ form factor

[Eye diagram (SFP+, 70 m OM3 / 100 m OM4) shows distortions caused by jitter and ISI.]

Source: Avago

Page 34: Optical Trends in the Data Center

FC-PI6 Project - 32G Optical Objectives

• Backward compatibility with 8GFC and 16GFC
• Same external connectors as present
  – LC and SFP+
• Cable length of 100 m on OM4 cables
  – Duplex fiber
  – Serial transmission
  – FEC
• Power goal at the port: less power per port than a comparable 40GE port in the 2014/15 timeframe
• Auto-negotiation down to 8GFC and 16GFC
• Products ship in 2014

Page 35: Optical Trends in the Data Center

IEEE Ethernet 802.3 40/100G

• IEEE 802.3
• 40 and 100 Gb/s
• At least 100 m on OM3 multimode fiber
• At least 150 m on OM4 multimode fiber
• At least 10 km on single-mode fiber
• At least 40 km on single-mode fiber (100G only)
• At least 2 km on single-mode fiber (40G only)
• At least 7 m on copper cable assembly

• Key project dates
  • 802.3ba 40/100G standard completed June 2010
  • 802.3bg 40G standard completed March 2011

Page 36: Optical Trends in the Data Center

40G Ethernet Parallel Optics: OM3/OM4

12F MPO Connector Interface

QSFP Transceiver Source: Avago

Page 37: Optical Trends in the Data Center

Ethernet 40G and 100G: OM3/OM4

OM3 and OM4 distances are contingent upon 1.5 dB and 1.0 dB total connector loss, respectively

Page 38: Optical Trends in the Data Center

40G Optical Transceiver: OM3/OM4

QSFP transceiver technology
• Standard 12F MPO connector
• ≤1.0 watts per port
• Now used for 40/64G InfiniBand

Source: Zarlink

Page 39: Optical Trends in the Data Center

40G eSR4 QSFP+

• 40G eSR4 parallel-optics extended-reach QSFP+ transceiver
• OM3/OM4:
  • 300 m / 400 m (industry connectivity)
  • 330 m / 550 m (CCS connectivity)
• CCS modeling shows 12% of data center lengths are >100 m
• Internal testing demonstrated 1250 m with a random transceiver and OM4 fiber
• Commercially available now

Page 40: Optical Trends in the Data Center

Potential 40G Solutions

Page 41: Optical Trends in the Data Center

100G Ethernet Parallel Optics: OM3/OM4

24F MPO Connector Interface

Source: USConec

Page 42: Optical Trends in the Data Center

100G Optical Transceiver: OM3/OM4

• CXP transceiver technology
• Standard 24-fiber MPO connector
• ≤3 watts per port

Source: Molex
24F MPO Pinless Connector

Page 43: Optical Trends in the Data Center

100G Polarity: Two-Row Transceiver Connectivity

Note: Only one scheme shown, but examples of several methods will be shown within the standard.

No industry traction expected for 802.3ba 100GBASE-SR10

Page 44: Optical Trends in the Data Center

40/100G Optical Transceiver: SMF

CFP transceiver technology
• Standard duplex LC connectors
• ≤20 watts per port
• 3 to 4 ports per card
• Large footprint
  • Equivalent to two 10G XENPAKs

Source: CFP MSA

Duplex LC Connector

Page 45: Optical Trends in the Data Center

40/100G: Twinax Copper

• Traditionally used for short-length InfiniBand connectivity; must be factory-connectorized

• No guidance included in the Ethernet 802.3ba standard for CAT UTP/STP copper cable

Source: FCI

QSFP Direct-Attached Twinax Cable

Page 46: Optical Trends in the Data Center

IEEE 40GBASE-T

• 40GBASE-T Study Group approved July 2012
• 40GBASE-T 802.3bq Task Group approved May 2013
  • Chip and copper cable manufacturers driving interest
  • Support is likely to be slow
  • Projected standard completion: 2015
• Baseline objectives:
  • Four-pair, balanced twisted-pair copper cabling
  • Up to two connectors
  • Up to at least 30 m
  • Cable: Cat 8 shielded, 2000 MHz

Page 47: Optical Trends in the Data Center

IEEE 802.3bm 40 and 100G Fiber Optic Cables Task Group

Task Group will develop guidance in accordance with approved distance objectives:

– Define a 40 Gb/s PHY for operation over at least 40 km of SMF
  – Duplex fiber, single wavelength expected
– Define a 100 Gb/s PHY for operation up to at least 500 m of SMF
  – Options under consideration include 4x25G parallel optics (8F), duplex-fiber wavelength division multiplexing (WDM), and duplex-fiber pulse amplitude modulation (PAM) (probably will not happen; no consensus)
– Define a 100 Gb/s PHY for operation up to at least 100 m of OM4
  – 4x25G parallel optics (8F) expected
– Define a 100 Gb/s PHY for operation up to at least 20 m of OM4
  – 4x25G parallel optics (8F) expected (probably will be removed; not needed based on minimal economic value)

• Expected standard completion date: May 2015

Page 48: Optical Trends in the Data Center

Market Trends Toward 100GE

The 100GE (4x25G) pluggable form factor is still evolving!

Page 49: Optical Trends in the Data Center

Future Multimode 100G: Relative Connectivity Cost/Circuit

PMD                        Max Distance (m)   Fiber Count   Relative Connectivity Cost/Circuit at 100 m
100GBASE-SR10-OM3          100                20            1.53
100GBASE-SR10-OM4          150                20            1.78
100GBASE-SR4-OM3           100                8             1
100GBASE-SR4-OM4           150                8             1.13
100GBASE-NR4-SM no WDM     2000               8             0.81
100GBASE-NR4-SM with WDM   2000               2             0.24
100GBASE-LR4-SM CFP2       10000              2             0.24
100GBASE-LR4-SM CFP        10000              2             0.24

Page 50: Optical Trends in the Data Center

Future Multimode 100G: Relative Transceiver Cost/Circuit

PMD                        Max Distance (m)   Relative Module Cost   Range
100GBASE-SR10              150                1                      1
100GBASE-SR4               100                1                      0.8, 1.2, 1.6
100GBASE-NR4 no WDM        2000               5                      1, 3-6
100GBASE-NR4 with WDM      2000               6                      5-8
100GBASE-LR4-SM CFP2       10000              25                     5-25
100GBASE-LR4-SM CFP        10000              32                     15-32

Page 51: Optical Trends in the Data Center

Future Multimode 100G: Relative Link Cost/Circuit

[Chart: relative link cost per circuit vs. distance (0-1000 m) for 100GBASE-SR10 (OM3/OM4), 100GBASE-SR4 (OM3/OM4), 100GBASE-NR4 (no WDM / with WDM), and 100GBASE-LR4 (CFP2/CFP).]
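The chart combines the two preceding tables: for a given PMD, total relative link cost is roughly the module cost plus the connectivity cost scaled with distance. A minimal sketch of that combination, assuming linear scaling of connectivity cost with length (an illustration, not the exact model behind the chart):

```python
# (relative module cost, relative connectivity cost per circuit at 100 m)
PMDS = {
    "100GBASE-SR4-OM4":      (1.0,  1.13),
    "100GBASE-NR4 with WDM": (6.0,  0.24),
    "100GBASE-LR4-SM CFP2":  (25.0, 0.24),
}

def link_cost(pmd, distance_m):
    """Relative link cost: module cost plus distance-scaled connectivity cost."""
    module, connectivity_at_100m = PMDS[pmd]
    return module + connectivity_at_100m * distance_m / 100.0

for pmd in PMDS:
    print(pmd, round(link_cost(pmd, 100), 2))
# SR4-OM4: 2.13, NR4 with WDM: 6.24, LR4 CFP2: 25.24 -- multimode wins at
# short reach; note SR4-OM4 cannot serve links beyond its 150 m maximum.
```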

Page 52: Optical Trends in the Data Center

IEEE High-Speed Ethernet Industry Connections

• Interim meeting, September 2012
  – Scope focused on building consensus related to the next speed of Ethernet for wireline applications
  – Straw polls showed strong support for 400G as the next generation of Ethernet
  – Study group approved March 2013
• Most likely first designs will be 16x25G (32 multimode fibers)

Source: USConec

Page 53: Optical Trends in the Data Center

Generations of 400GbE: Formative Stages Still Evolving

                               1st Gen                   1st Gen                   2nd Gen
Optical Module                 CDFP – copper and MMF     4x CFP4 – SMF             CFP2 – MMF and SMF
Electrical Interface (Gb/s)    CDAUI-16: 16 lanes of     CDAUI-16: 16 lanes of     CDAUI-8: 8 lanes of 50G
                               retimed 25G               retimed 25G
Availability                   2016                      2016                      2020?

Key to Roman-numeral naming: C = 100, CD = 400, D = 500

Source: Brocade

Page 54: Optical Trends in the Data Center

When Is Terabit Coming?

Key: hollow symbols = predictions; stretched symbols = time tolerance.

[Chart: Ethernet data rate and line rate (b/s) vs. standard completion year, 2010-2025, from 1G through 1T. Lane technologies: nx10.3125G (40GbE 4x10G, 100GbE 10x10G), nx25.8G (100GbE 4x25G, 400GbE 16x25G), nx50G (400GbE 8x50G), and nx100G (100GbE 1x100G, 400GbE 4x100G, TbE? 10x100G, 1.6TbE? 16x100G).]

Source: Brocade

Page 55: Optical Trends in the Data Center

40/100G Data Center Architecture: Top of Rack EDGE Switch

40/100G: 2 x 24-fiber OM3/OM4 Uplink

Page 56: Optical Trends in the Data Center
Page 57: Optical Trends in the Data Center

Optical Connectivity 40/100G OM3/OM4

[Migration: duplex LC 10G ports → duplex LC 10G ports + 12-fiber MPO 40G/100G ports → 12-fiber MPO 40G and 100G ports]

10G LC modules are independently changed out for 40G and/or 100G MPO panels.

Page 58: Optical Trends in the Data Center

Contact Info

• Doug Coleman

• E-mail: [email protected]

• Phone: 828-901-5580

• Fax: 828-901-5488

• Address: 800 17th Street NW, Hickory, NC 28601