ACE - Comcore

Aakash Agarwal, Email: [email protected]

Posted on 08-Aug-2015

TRANSCRIPT

Page 1: ACE - Comcore

Aakash Agarwal
Email: [email protected]

Page 2: ACE - Comcore

Agenda
• What is Load Balancing?
• Why and When to use Load Balancing?
• The Use of Load Balancers
• Load Balancer Enhancement
• Types of Load Balancers
• Load Balancing Concepts
• Application Traffic and Suggested Predictors
• Layer 4 Versus Layer 7 Switching
• Connection Management
• Address Translation and Load Balancing
• Offloading Servers
  – SSL Offload
  – TCP Offload
  – HTTP Compression

Page 3: ACE - Comcore

Agenda

• Application Environment Independency
• ACE Virtual Contexts
• ACE Physical Connections
• ACE VC Creation and Allocation
• Integrating ACE VC in DC Environment
• Allowing Management Traffic
• LOAD BALANCING CONFIGURATION
• Virtual Context Fault Tolerance

Page 4: ACE - Comcore

Training Prerequisites

• Understanding of standard network design
• Understanding of virtualization
• Understanding of and experience with Layer 2 switching
• Understanding of and experience with routing

Page 5: ACE - Comcore

Load Balancing

Page 6: ACE - Comcore

What is Load Balancing?
• Load balancing is a method for distributing workloads across multiple computing resources.
• Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overloading any one resource.
• Load balancing is not only about sharing load; it also provides redundancy.
  – Server Load Balancing
  – Network Load Balancing
• Never seen it before? You have ;)
  – Look at yourself on two legs: when one leg is heavily loaded, the other shares the load and provides redundancy ..kidding
  – When you open Google.com, does it go to the same server every time? No, because there is load balancing in some form (e.g. a CDN). Try nslookup:

Page 7: ACE - Comcore

Why and When?
• Why and when to use load balancing?
  – Load balancing, by its very nature, is the solution to more than one problem. Today you might run an application on a single server; when the need arises, you can either upgrade the server or run the application on multiple servers. That is where load balancing comes into play. It offers these benefits:
• Limiting your points of failure: failover and redundancy
• Load distribution: growing beyond a single-server configuration

Page 8: ACE - Comcore

The Use of Load Balancers
• Dedicated network load balancers have been heavily used since the second half of the 1990s.
• They were created to scale the performance of websites, and their use has increased in data centers as they incorporate new features and functions.
• They were originally created to improve on DNS-based server load balancing, providing a better response to each client requesting name resolution.

Page 9: ACE - Comcore

Load Balancer Enhancement
• DNS load balancing can be easily deployed, but it has several problems:
  – DNS servers are not aware of the application state on the balanced servers; clients may receive the IP address of a failed server.
  – The load-balancing service DNS servers provide does not take into account any load information from the balanced servers, so it can easily overload a server.
  – A DNS request does not specify which type of traffic the client will use afterward, or what type of device (tablet, phone, or desktop) the client is. Hence the choice of the best server cannot be based on these parameters.
• A complete load-balancing paradigm became available with the creation of hardware-based load balancers, which can use Layer 4 to 7 parameters:
  – TCP destination port
  – UDP destination port
  – HTTP URL
  – HTTP session cookie
  – Strings recognized in the connection data

Page 10: ACE - Comcore

Types of Load Balancers
• Load balancers can be classified by the function they perform:
  – Hardware based
    • Local load balancers
    • Global load balancers
  – Software based
    • Local load balancers
    • Global load balancers

Page 11: ACE - Comcore

Load Balancing Concepts
• Whereas a great variety of load-balancing devices exists in the market (and even inside Cisco), every load balancer deployment has common elements and definitions:
  – Real servers: Represent the addresses of the servers that will receive sessions from the load balancer. A real server is, for example, a VM or a physical server.
  – Server farm: A set of real servers that share the same application. A real server can belong to multiple server farms.
• ****A server farm is not a server cluster.**** A cluster is defined as a set of servers with an additional layer of software that allows some kind of centralized administration and internal information sharing between its members. In contrast, a server farm simply characterizes a group of servers that host the same application, whether or not they are part of a cluster.
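On an ACE context these two elements map directly onto `rserver` and `serverfarm` configuration objects. A minimal sketch (the names and addresses here are hypothetical, not taken from the slides):

```
! Define two real servers and bring them in service
rserver host WEB1
  ip address 192.168.1.11
  inservice
rserver host WEB2
  ip address 192.168.1.12
  inservice

! Group them into a server farm sharing the same application
serverfarm host SF_WEB
  rserver WEB1
    inservice
  rserver WEB2
    inservice
```

Note that the same `rserver` object could also be added to a second server farm, reflecting the "a real server can belong to multiple server farms" point above.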

Page 12: ACE - Comcore

Load Balancing Concepts (Cont.)
• Probes: Synthetic requests the load balancer creates to check whether an application is available on a real server or server farm. They can be as simple as ICMP echo requests or as sophisticated as an HTTP GET query.
• The Virtual IP (VIP): The address the load balancer uses to receive client connections. This IP address is returned by DNS servers as the name resolution for the application URL and advertised to clients. It is the place where all load-balancing configuration is bound together in a single thread.
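As a sketch of how these two concepts look in ACE CLI (names, URL, and VIP address are hypothetical): a probe is defined, attached to a server farm, and the VIP is expressed as a Layer 3/4 class map that later anchors the load-balancing policy.

```
! HTTP probe: periodically GET a page and expect a 200
probe http PROBE_WEB
  interval 10
  request method get url /index.html
  expect status 200 200

! Attach the probe to the farm so failed servers are taken out
serverfarm host SF_WEB
  probe PROBE_WEB

! The VIP: a class map matching the virtual address and port
class-map match-all VIP_HTTP
  2 match virtual-address 10.10.10.100 tcp eq www
```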

Page 13: ACE - Comcore

Load Balancing Concepts (Cont.)
• The stickiness table: An optional element that can store client information during the client's first access. The load balancer can use this information to always forward the client's subsequent connections to the first selected server, thus maintaining session state on the same server. Examples of stored client information are the source IP address, HTTP cookies, and special strings.
• Predictor: The method used to distribute traffic between the servers in a server farm. A wide range of predictors is available; round robin is the default in ACE:
  – Round robin
  – Least connections
  – Least load
  – Hashing
  – Many others (URL based, etc.)
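Both elements are configured per server farm in ACE. A sketch, assuming source-IP stickiness and the least-connections predictor (all names are hypothetical):

```
! Override the default round-robin predictor
serverfarm host SF_WEB
  predictor leastconns

! Source-IP stickiness group pointing at the farm
sticky ip-netmask 255.255.255.255 address source STICKY_SRC
  timeout 30
  serverfarm SF_WEB

! The LB policy references the sticky group instead of the raw farm
policy-map type loadbalance first-match LB_WEB
  class class-default
    sticky-serverfarm STICKY_SRC
```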

Page 14: ACE - Comcore

Load balancing Concepts (Cont.)

Page 15: ACE - Comcore

Application Traffic and Suggested Predictors

Page 16: ACE - Comcore

Layer 4 versus Layer 7 Switching
• When an LB receives a new connection on a VIP, it selects a server farm based on the client connection parameters, e.g. IP address, IP protocol, TCP/UDP port, cookies, etc.
  – Layer 4: When an LB is performing L4 switching, all the information it needs to select the best server for a new connection is contained in the TCP SYN (or the first UDP datagram). The LB does not treat connections differently based on parameters in the data payload.

Page 17: ACE - Comcore

Layer 4 versus Layer 7 Switching

Page 18: ACE - Comcore

Layer 4 versus Layer 7 Switching
• Layer 7 switching: The LB must make decisions beyond the transport protocol. Server selection must wait until the client sends relevant information from the session, presentation, or application layer. The LB becomes a transparent TCP proxy, establishing the connection with the client on behalf of the real servers. This spoofing process is called "delayed binding" or "proxy connection".
• Layer 7 switching happens when the load balancer forwards a connection to a server using information obtained from the upper layers (5, 6, and 7) of the OSI model.
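In ACE, a Layer 7 decision is expressed with an HTTP load-balance class map matched inside a first-match policy. A sketch (URL pattern and farm names are hypothetical): requests for image URLs go to one farm, everything else to the default farm.

```
! L7 class map: match on the HTTP URL
class-map type http loadbalance match-all L7_IMAGES
  2 match http url /images/.*

! First-match L7 policy: image requests to SF_IMAGES, rest to SF_WEB
policy-map type loadbalance first-match LB_L7
  class L7_IMAGES
    serverfarm SF_IMAGES
  class class-default
    serverfarm SF_WEB
```

Because the URL only arrives after the three-way handshake, applying a class map like this implicitly triggers the delayed-binding behavior described above.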

Page 19: ACE - Comcore

Layer 4 versus Layer 7 Switching

Page 20: ACE - Comcore

Connection Management
• Layer 4: In this scenario the LB must coordinate the rewrite of Ethernet, IP, and TCP/UDP information from the original client connection into the communication with the selected server (after all, clients connect to the VIP).
• Layer 7: The LB must control two completely different connections, with distinct parameters such as checksums and sequence numbers. The coordination between them is called "splicing".
• In some cases the LB is required to do even more connection management than this, for example when it is directly dispatching connections.
• An LB can do symmetric or asymmetric connection management:
  – Symmetric: All packets, whether from the client or the selected server, always reach the LB. Because the LB is aware of the entire communication, it can deploy more advanced server load-balancing features such as Layer 7 switching, IP address translation, and header manipulation. This is the MOST popular connection management mechanism.
  – Asymmetric: Only part of the connection traverses the LB. This method has the advantage of not overloading the LB with excessive return traffic from the servers (such as video streaming), BUT the LB can see only one side of the traffic (client to server), so several load-balancing features cannot be deployed, such as address and port translation. Timeouts for TCP connections are usually configured on the LB because it will never receive a FIN from the server.

Page 21: ACE - Comcore

Connection Management

Page 22: ACE - Comcore

Connection Management

Page 23: ACE - Comcore

Address Translation and Load Balancing

• Deploying NAT and PAT is fairly easy for the devices can handle the upper layer parameters such as HTTP URLs: – Server NAT (Symmetric) - Good when you have Servers on Private IP Addressing – DUAL NAT – To Hide Source and Destination both– Port Redirection – Servers receiving connection on non standard ports– Transparent

• Explanation is on other slides –– Phase 1- Client to Load Balancer – Phase 2 – Load Balancer to Server– Phase 3 – Server to LB– Phase 4 – LB to Client

Page 24: ACE - Comcore

Address Translation and Load Balancing

Page 25: ACE - Comcore

Address Translation and Load Balancing

Page 26: ACE - Comcore

Address Translation and Load Balancing

Page 27: ACE - Comcore

Address Translation and Load Balancing

Page 28: ACE - Comcore

Offloading Servers
• Load balancers can also provide additional services to servers, offloading them from hardware-consuming operations. The most common offload services they provide are encryption, authentication, connection processing, and compression.
• This frees server resources for the main application, enabling better response time and performance for users.
• Three advanced offload services that an LB can provide:
  – SSL Offload
  – TCP Offload
  – HTTP Compression

Page 29: ACE - Comcore

SSL Offload
• SSL is a protocol created by Netscape in the 1990s that provides security for Internet connections. SSL ensures CAI (Confidentiality, Authentication, and Integrity).
  – In 1999 the IETF introduced a standardized version of SSL called TLS.
• Both SSL and TLS act between the transport and session layers.
• After a TCP session is established between a client and server, the SSL connection setup performs key exchanges and negotiates an appropriate encryption algorithm.
• Once the SSL connection is established, the upper-layer protocol can send data using SSL as its own secure transport layer. The protocol most commonly used over SSL is HTTP.
• SSL offload relieves servers from intensive encryption processing. The benefits are:
  – Total offload of encryption from the servers
  – Layer 5 to 7 awareness for Layer 7 switching on SSL connections
  – Savings on public certificates, as only the LB needs them; the real servers do not
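A sketch of SSL termination on an ACE context, assuming the key and certificate files have already been imported into the context (service, class, and file names are hypothetical):

```
! HTTPS VIP
class-map match-all VIP_HTTPS
  2 match virtual-address 10.10.10.100 tcp eq https

! SSL proxy service: the ACE acts as SSL server to the client
ssl-proxy service SSL_OFFLOAD
  key ace-key.pem
  cert ace-cert.pem

! Bind the SSL service to the VIP in the multi-match policy;
! traffic is decrypted, then load balanced by LB_WEB in clear text
policy-map multi-match CLIENT_VIPS
  class VIP_HTTPS
    loadbalance vip inservice
    loadbalance policy LB_WEB
    ssl-proxy server SSL_OFFLOAD
```

Because the ACE sees the decrypted payload, Layer 7 class maps can now be applied to what was an encrypted session, which is the "Layer 5 to 7 awareness" benefit listed above.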

Page 30: ACE - Comcore

SSL Offload

Page 31: ACE - Comcore

SSL Offload
• An LB that can perform SSL offload can act as an SSL server to the client, an SSL client to the server, or both. There are three deployment options for SSL:

Page 32: ACE - Comcore

TCP Offload
• When a server performs TCP communication it must execute the following:
  – Connection establishment (3-way handshake)
  – Acknowledgement of segments
  – Checksum and sequence number calculation
  – Sliding window calculation
  – Congestion control
  – Connection termination
• Depending on the number and characteristics of the connections, a server can spend a great part of its CPU, memory, and other resources on this.
• An LB can use its connection management features to offload web servers from excessive TCP processing.
• Instead of passing on the totality of the sessions sent by users, an LB can send all the data from those connections inside one or two connections to the server. This is known as "TCP reuse" or "TCP multiplexing".

Page 33: ACE - Comcore

TCP Offload

Page 34: ACE - Comcore

HTTP Compression
• The majority of web servers and browsers have the capability to compress and decompress transmitted objects in order to:
  – Make better use of the available bandwidth for both parties
  – Improve web page response time
• The most common compression methods used by browsers are GZIP and Deflate. The compression operation usually consumes considerable server resources such as CPU and memory.
• A large volume of objects to compress can seriously damage the application performance of a web server.
• In this case the LB can observe which type of compression mechanism the client browser supports and, on behalf of the web server, compress all objects sent to that client.
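On the ACE 4710 appliance this offload is, to my understanding, enabled as an action in the Layer 7 load-balance policy; a sketch assuming the `compress default-method` action (policy and farm names are hypothetical):

```
! Compress responses with gzip on behalf of the servers,
! honoring the Accept-Encoding header sent by the client
policy-map type loadbalance http first-match LB_WEB
  class class-default
    serverfarm SF_WEB
    compress default-method gzip
```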

Page 35: ACE - Comcore

Application Environment Independency

• Multi-tier applications are very common in most corporate DCs. This popular client-server architecture separates functions among groups of servers to create flexible applications, where a server tier can easily be replaced or rewritten.
  – Presentation tier: This layer is responsible for front-end communication with the clients, and generally uses web technology (web servers).
  – Application tier: This group of servers controls the business logic.
  – Data tier: This is where information is stored and retrieved; DB servers are usually the components of this tier.

Page 36: ACE - Comcore

Application Environment Independency

• When a DC houses many independent customers, the number of required LBs can be even bigger. These are called multitenant data centers, and they can belong to a service provider or to the parent corporation.
• This requirement can mean separate devices for different customer environments, especially if the deployed LBs do not have any form of management isolation for configuration elements such as real servers, server farms, probes, and VIPs.
• However, a single customer's requirement may be far less than what one LB can handle; in that case the same LB can serve multiple customers with separation at the management plane.

Page 37: ACE - Comcore

ACE Virtual Contexts
• Cisco created its first hardware-based load balancer in 1996; it was called the CLD (Cisco LocalDirector). In 2000 its successor arrived: the CSS (Cisco Content Services Switch) 11000, followed by the CSM module for Catalyst 6500 switches.
• To address the low-utilization challenge explained on the last slide, Cisco created the concept of virtual contexts and applied it in the ACE product series.
• An ACE virtual context is an abstraction of an independent LB with its own interfaces, configuration, policies, and administrators.
• The creation and configuration of virtual contexts is done through management access built into the Admin context.
• The "Admin" context is automatically created when an ACE is configured for the first time.
• It is NOT recommended to use the Admin context for load balancing.
• An ACE 4710 appliance can have up to 20 VCs.
• An ACE module can have up to 250 VCs.
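From the Admin context, creating a VC means defining it, handing it VLANs, and binding it to a resource class that caps how much of the shared hardware it may consume. A sketch (context name, VLAN range, and percentages are hypothetical):

```
! Resource class: guarantee this tenant 10% of all resources
resource-class RC_GOLD
  limit-resource all minimum 10.00 maximum equal-to-min

! The virtual context itself, with its allowed VLANs
context CUSTOMER_A
  allocate-interface vlan 100-101
  member RC_GOLD
```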

Page 38: ACE - Comcore

ACE Virtual Contexts

Page 39: ACE - Comcore

ACE Physical Connection
• Each ACE form factor has a different way of connecting to the network:
  – Connecting the ACE appliance:
    • The ACE 4710 has four 1000BASE-T Gigabit Ethernet interfaces
    • These interfaces can be connected to a single switch or to up to four different switches

Page 40: ACE - Comcore

ACE Physical Connection ACE 4710 Config

Page 41: ACE - Comcore

ACE Physical Connection Switch Side

Page 42: ACE - Comcore

ACE Physical Connection ACE Module:

Page 43: ACE - Comcore

ACE VC Creation and Allocation

Page 44: ACE - Comcore

ACE Basic Commands

• Moving between Contexts:

• Verifying Contexts:
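The command slides were figures in the original deck; as a sketch, the commands in question are the Admin-context `changeto` and `show context` (the context name is hypothetical):

```
! Move from the Admin context into a tenant context and back
ACE/Admin# changeto CUSTOMER_A
ACE/CUSTOMER_A# changeto Admin

! List all configured contexts and their resource classes
ACE/Admin# show context
```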

Page 45: ACE - Comcore

Integrating ACE VC in DC Environment

• An LB is a network service device and must exchange traffic with the network to function properly.
• An ACE VC has SVIs and BVIs, as opposed to the physical interfaces of a standalone LB.
• Only VLAN manipulation is required to insert a VC.
• Three main designs:
  – Routed mode
  – Bridged mode
  – One-arm mode

Page 46: ACE - Comcore

Routed Mode LB
• The LB VC performs the function of a router, connecting different IP subnets.
• When an SVI is configured (using the "interface vlan" command with an IP address), the context automatically enables routed mode.
• The VIP can belong to either subnet (or even to a different one), but it is mandatory that the VIP be routable for the clients.
• The server response back to the client is forced through the ACE by the routing design.
• Since the VC acts as a router in between, it is possible to assign RFC 1918 addresses to the internal servers while the VIP uses a public IP address.
• An ACE VC supports only static routing (no dynamic routing protocols).
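A routed-mode sketch with a client-side and a server-side SVI (addresses, VLANs, and the `CLIENT_VIPS` policy name are hypothetical); note the ACL, since an ACE context drops traffic that is not explicitly permitted:

```
! Permit transit traffic (ACE denies by default)
access-list ALL line 10 extended permit ip any any

! Client-side SVI: VIP subnet, LB policy applied here
interface vlan 10
  ip address 10.10.10.1 255.255.255.0
  access-group input ALL
  service-policy input CLIENT_VIPS
  no shutdown

! Server-side SVI: real-server subnet
interface vlan 20
  ip address 192.168.1.1 255.255.255.0
  access-group input ALL
  no shutdown

! Static default route toward the upstream router
ip route 0.0.0.0 0.0.0.0 10.10.10.254
```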

Page 47: ACE - Comcore

Routed Mode LB

Page 48: ACE - Comcore

Routed Mode LB

Page 49: ACE - Comcore

Bridge Mode LB
• The LB VC performs the function of a transparent bridge.
• It learns the MAC addresses of directly connected devices through ARP.
• When a BVI interface is configured, the context automatically becomes a bridged context.
• Each context can bridge only two VLANs.
• The bridged design permits two VLANs to be mapped to a single IP subnet.
• This configuration forces server response traffic to traverse the VC without any tweaking.
• The BVI is accessible from both VLANs and can be used for management purposes.
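A bridged-mode sketch: two VLANs joined into one bridge group sharing a single subnet, with the BVI carrying the context's IP address (all numbers are hypothetical):

```
! Permit transit traffic (ACE denies by default)
access-list ALL line 10 extended permit ip any any

! Client-side and server-side VLANs in the same bridge group
interface vlan 10
  bridge-group 1
  access-group input ALL
  no shutdown
interface vlan 20
  bridge-group 1
  access-group input ALL
  no shutdown

! BVI: the context's IP address, reachable from both VLANs
interface bvi 1
  ip address 192.168.1.2 255.255.255.0
  no shutdown
```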

Page 50: ACE - Comcore

Bridge Mode LB

Page 51: ACE - Comcore

Bridge Mode LB

Page 52: ACE - Comcore

Bridge Mode LB

Page 53: ACE - Comcore

One-Arm Mode
• You configure the ACE with a single VLAN that handles both client requests and server responses; the ACE does not sit inline in the client-server traffic path.
• This design is useful when the load-balanced traffic is small compared to the total traffic sent to the servers.
• For symmetric load balancing, one of two methods must be chosen:
  – Dual NAT
  – Policy-based routing (PBR)
• In this design the server sees an IP address of the ACE VC as the source, so the responses are directed back to the ACE VC.
• The problem with this design is that the server cannot see the original IP address of the client.
• However, for HTTP connections the ACE can insert the original client IP address into an HTTP header, making it possible for the servers to know the client IP.
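A one-arm (dual-NAT) sketch: client source addresses are translated to a NAT pool on the same VLAN so returns come back to the ACE, and the original client IP is preserved in an inserted HTTP header (VLAN, addresses, and names are hypothetical; `%is` is the ACE variable for the source IP):

```
! Single VLAN carries both client and server traffic
interface vlan 40
  ip address 10.10.40.2 255.255.255.0
  nat-pool 1 10.10.40.100 10.10.40.100 netmask 255.255.255.255 pat
  service-policy input CLIENT_VIPS
  no shutdown

! L7 policy: pick the farm and insert the client IP for the servers
policy-map type loadbalance http first-match LB_L7
  class class-default
    serverfarm SF_WEB
    insert-http X-Forwarded-For header-value "%is"

! Multi-match policy: LB the VIP and source-NAT to the pool
policy-map multi-match CLIENT_VIPS
  class VIP_HTTP
    loadbalance vip inservice
    loadbalance policy LB_L7
    nat dynamic 1 vlan 40
```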

Page 54: ACE - Comcore

One ARM Mode (NAT)

Page 55: ACE - Comcore

One ARM Mode (PBR)

Page 56: ACE - Comcore

Allowing Management Traffic
• Create a management class map that defines the management protocols
• Create a policy map that permits the class map
• Apply the policy map to an interface, to a group of interfaces, or to the entire context
• The Admin context also needs a similar configuration
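The steps above can be sketched as follows (names, protocols, and the source subnet are hypothetical):

```
! Step 1: management class map naming the allowed protocols
class-map type management match-any MGMT_ACCESS
  2 match protocol ssh any
  3 match protocol icmp any
  4 match protocol https source-address 10.0.0.0 255.0.0.0

! Step 2: management policy map permitting that class
policy-map type management first-match MGMT_POLICY
  class MGMT_ACCESS
    permit

! Step 3: apply it to the interface used for management
interface vlan 10
  service-policy input MGMT_POLICY
```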

Page 57: ACE - Comcore

Allowing Management Traffic
• When you turn a router on, it starts to route packets
• When you turn a firewall on, it starts to drop packets
• When you turn an LB (ACE) on, it starts to drop packets unless you specify otherwise
• An LB is a stateful device: it keeps track of connections rather than packets
• ACE treats a UDP flow as a connection (packets exchanged between client and server with the same ports)
• ACE treats an ICMP flow as a connection

Page 58: ACE - Comcore

LOAD BALANCING CONFIGURATION

Page 59: ACE - Comcore

REAL SERVER CONFIG

Page 60: ACE - Comcore

PROBE/SERVER FARM CONFIG

Page 61: ACE - Comcore

Layer 7 Class Map/Mobile Server Farm

Page 62: ACE - Comcore

Layer 7 Policy Map/Layer 4 Class Map

Page 63: ACE - Comcore

Multi-Match Policy Map
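The configuration slides above were figures in the original deck. As a sketch, the pieces fit together like this, from L4 class map through multi-match policy to the interface (all names and addresses are hypothetical):

```
! Layer 4 class map: the VIP
class-map match-all VIP_HTTP
  2 match virtual-address 10.10.10.100 tcp eq www

! Layer 7 policy: choose the server farm
policy-map type loadbalance first-match LB_POLICY
  class class-default
    serverfarm SF_WEB

! Multi-match policy: bind the VIP to the LB policy
policy-map multi-match CLIENT_VIPS
  class VIP_HTTP
    loadbalance vip inservice
    loadbalance policy LB_POLICY
    loadbalance vip icmp-reply active

! Apply to the client-facing interface
interface vlan 10
  service-policy input CLIENT_VIPS
```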

Page 64: ACE - Comcore

Virtual Context Fault Tolerance
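The fault-tolerance slide was a figure in the original deck. As a sketch, ACE redundancy pairs two appliances over a dedicated FT VLAN and protects contexts per FT group (VLAN, addresses, timers, and the context name are hypothetical):

```
! Dedicated FT VLAN between the two ACE peers
ft interface vlan 99
  ip address 10.99.99.1 255.255.255.0
  peer ip address 10.99.99.2 255.255.255.0
  no shutdown

! Peer definition with heartbeat timers
ft peer 1
  ft-interface vlan 99
  heartbeat interval 300
  heartbeat count 10

! FT group: this unit is active (higher priority) for CUSTOMER_A
ft group 1
  peer 1
  priority 150
  peer priority 50
  associate-context CUSTOMER_A
  inservice
```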

Page 65: ACE - Comcore

Extra Links:

ACE 4710 - http://www.cisco.com/c/en/us/products/collateral/application-networking-services/ace-4710-application-control-engine/Data_Sheet_Cisco_ACE_4710.html

ACE 30 - http://www.cisco.com/c/en/us/products/collateral/interfaces-modules/ace-application-control-engine-module/data_sheet_c78_632383.html

Page 66: ACE - Comcore