Web Server Support for Tiered Services (Telecommunication Management Lab, M.G. Choi)
Posted on 30-Dec-2015
TRANSCRIPT
Web Server Support for Tiered Services
Telecommunication Management Lab M.G. Choi
1. Introduction
2. Servers and QoS
3. Architecture for Server Based QoS
4. Prototype
5. Results
6. Summary
Content
Introduction
• Network QoS has been the main focus for performance
• Non-isochronous applications such as web pages also benefit from QoS; example: North America's growing e-commerce traffic
• The necessity of server QoS: mechanisms and policies are needed for establishing and supporting QoS, and network QoS alone is not sufficient to support end-to-end QoS
The hypothesis that servers become an important element in delivering QoS
Research questions to be answered
• The impact of Internet workloads on servers
• The impact of server latency on end-to-end latency
• Server mechanisms to improve quality of service
• How to protect the server from overload
• How the server can support tiered user service levels with unique performance attributes
The importance of requirements on both servers and networks: show the increasing role of servers in providing end-to-end QoS and the potential of tiered services
Introduction (Cont’d)
Servers and QoS
Empirical study
• Instrumented and monitored one of the large ISPs in North America in 1997
• Quantified the delay components for web, news, and mail servers
• The network topology [figure 1]
• A mixture of active and passive measurement techniques
• The NNTP server response time [figure 2]
• The coast-to-coast network response time [figure 3]
Trends Affecting Complex E-commerce Applications
Trends increasing network performance
• Decreasing network latency due to the increasing capacity of network backbones
• Network latency guaranteed by the ISP
• Caches becoming more pervasive
Trends increasing server latency
• Flash crowds
• New application technologies (Java, SSL, databases, middleware)
• Much richer media with more and larger images, audio, voice, and video
Overloaded Servers Cause Poor End-to-End QoS
Measurements of busy web sites: the response rate grows linearly until the server nears its maximum capacity in HTTP requests [figure 4]
HTTP and a user transaction [figure 5]
Over-Provisioning Servers
• As web applications evolve, the client demand curve grows very steeply
• Internet applications now have an unpredictable client population
• No reasonable amount of hardware can guarantee predictable performance for flash crowds
• Over-provisioning servers cannot provide tiered services or applications
• Network QoS cannot solve scheduling or bottleneck problems at the server and is ignored by the server's FIFO servicing
• Server QoS mechanisms support tiered service and provide overload protection
Architecture: servers consisting of multiple nodes (web, application, and database)
Philosophy: create a low-overhead, scalable infrastructure that is transparent to applications and web servers
Two goals, supporting two key capabilities
• Effectively manage peaks in client HTTP request rates
• Support tiered service levels that enable preferential treatment of users or services (to improve the performance of premium tiers)
The architecture presented [figure 6]; the request class is introduced for tiered service; the architecture supports integration with network QoS mechanisms and management systems
Architecture for Server-Based QoS
Related Work and Prototype
Related work
• Operating-system control mechanisms to ensure class-based performance in web servers
• Scheduling of web-server worker processes with the same priority
• Research on intelligent switches and routers
Prototype
• Modifies the FIFO servicing model of Apache 1.2.4
• Identical worker processes listen on a UNIX socket for HTTP connections and serve requests
• Components: the connection manager, request classifier, admission controller, request scheduler, and resource scheduler
Connection Manager
• A new, single acceptor process intercepts all requests
• Classifies each request and places it on the appropriate tier queue
• The connection manager must run frequently enough to keep the request queues full; otherwise worker processes may execute requests from lower tiers while premium requests are prevented from establishing a TCP connection and are dropped
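The acceptor/worker split described above can be sketched in a few lines of Python. This is a toy illustration, not the paper's code: the tier names, `accept_request`, and `next_request` are assumptions for the sketch, with in-process queues standing in for the real inter-process queues.

```python
import queue

TIERS = ("premium", "basic")                    # highest tier first
tier_queues = {t: queue.Queue() for t in TIERS}  # one queue per tier

def accept_request(request, tier):
    """Acceptor side: place the classified request on its tier queue."""
    tier_queues[tier].put(request)

def next_request():
    """Worker side: take from the highest non-empty tier queue."""
    for tier in TIERS:
        try:
            return tier_queues[tier].get_nowait()
        except queue.Empty:
            continue
    return None                                  # all queues idle
```

With this shape, a worker only falls through to the basic queue when the premium queue is empty, which is the behavior the slide says the connection manager must preserve by keeping the queues full.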
Request Classification
• Identifies and classifies each incoming request into a class
• Classification mechanisms are user-class based or target-class based
User-class based (source of the request)
• Client IP address
• HTTP cookie
• Browser plug-ins
Target-class based
• URL, request type, or file-name path
• Destination IP address
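A minimal classifier combining the user-class and target-class rules above might look as follows. The subnet, cookie, and path values are invented for illustration; only the rule categories come from the slide.

```python
PREMIUM = 0   # higher tier, served preferentially
BASIC = 1

# user-class rules: source of the request (values are examples)
PREMIUM_NETS = ("10.1.",)            # hypothetical premium client subnet
PREMIUM_COOKIE = "tier=premium"      # hypothetical cookie marker

# target-class rules: what is being requested (values are examples)
PREMIUM_PATHS = ("/checkout", "/api/pay")

def classify(client_ip, cookie, url_path):
    """Return the tier for one incoming HTTP request."""
    if any(client_ip.startswith(net) for net in PREMIUM_NETS):
        return PREMIUM                      # user-class: client IP address
    if PREMIUM_COOKIE in cookie:
        return PREMIUM                      # user-class: HTTP cookie
    if any(url_path.startswith(p) for p in PREMIUM_PATHS):
        return PREMIUM                      # target-class: URL path
    return BASIC
```

Any single matching rule is enough to promote a request; everything else defaults to the basic tier.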
Admission Control
When the server is saturated, the admission control rule applies (Premium > Basic)
Two admission-control trigger parameters
• Total requests queued
• Number of premium requests queued
Rejection is done by simply closing the connection
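An admission check driven by the two trigger parameters above could be sketched like this. The threshold values and function name are assumptions; the Premium > Basic rule and the two triggers are from the slide.

```python
MAX_TOTAL = 100       # trigger 1: total requests queued (example value)
MAX_PREMIUM = 40      # trigger 2: premium requests queued (example value)

def admit(tier, total_queued, premium_queued):
    """Apply the Premium > Basic rule as the server saturates."""
    if tier == "premium":
        # premium is refused only when its own queue overflows
        return premium_queued < MAX_PREMIUM
    # basic is shed first, as soon as the total queue saturates
    return total_queued < MAX_TOTAL
```

A request for which `admit` returns `False` is rejected by simply closing the connection, as the slide states.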
Request and Resource Scheduling
• To process a request, selection among queued requests is based on the scheduling policy
• The scheduling policy may offer many options for processing
Several potential policies
• Strict priority
• Weighted priority
• Shared capacity
• Fixed capacity
• Earliest deadline first
Resource scheduling provides more resources to premium requests and fewer to basic requests
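Two of the listed policies, strict priority and weighted priority, can be sketched as dequeue functions. This is an illustrative interpretation, not the prototype's code; the weight scheme and function names are assumptions.

```python
from collections import deque

def dequeue_strict(queues):
    """Strict priority: always drain the highest tier first."""
    for q in queues:               # queues[0] = premium, queues[1] = basic
        if q:
            return q.popleft()
    return None

def dequeue_weighted(queues, weights, counter):
    """Weighted priority: serve tiers in proportion to their weights.
    `counter` cycles through slots 0..sum(weights)-1; the slot picks
    which tier gets this turn."""
    slot = counter % sum(weights)
    bound = 0
    for q, w in zip(queues, weights):
        bound += w
        if slot < bound and q:
            return q.popleft()
    return dequeue_strict(queues)  # chosen tier empty: fall back
```

Strict priority can starve the basic tier under sustained premium load, which is why the slide lists weighted and capacity-based policies as alternatives.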
Apache Source Modification
The number of Apache code changes is minimal
http_main.c modifications
• Start the connection-manager process and set up the queues
• Change the child Apache processes to accept requests from the connection manager, not the HTTP socket
An additional file, connection_mgr.c, is linked in
• The classification policy, enqueue mechanism, dequeue policy, and connection-manager process code
Additional shared memory and semaphores
• The state of the queues: each class's queue length, the number of requests executing in each class, the last class to have a request dequeued, and the total count of waiting requests across classes
• Access to shared memory is synchronized through semaphores
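The shared bookkeeping described above can be illustrated with Python's `multiprocessing` primitives standing in for the System V shared memory and semaphores a modified Apache would use. The class name and counter layout are assumptions; the tracked state (per-class queue lengths, total waiting count, semaphore-guarded access) follows the slide.

```python
from multiprocessing import Semaphore, Value

class SharedQueueState:
    """Per-class queue lengths and a total waiting count in shared
    memory, guarded by one semaphore (illustrative sketch)."""

    def __init__(self, n_classes):
        self.lengths = [Value("i", 0) for _ in range(n_classes)]
        self.total_waiting = Value("i", 0)
        self.lock = Semaphore(1)          # one holder at a time

    def enqueue(self, cls):
        with self.lock:                   # synchronized, as in the slide
            self.lengths[cls].value += 1
            self.total_waiting.value += 1

    def dequeue(self, cls):
        with self.lock:
            if self.lengths[cls].value == 0:
                return False              # nothing waiting in this class
            self.lengths[cls].value -= 1
            self.total_waiting.value -= 1
            return True
```

Both the connection-manager process and the worker processes would attach to this state, which is why every update is wrapped in the semaphore.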
Results
Comparison of response time, throughput, and error rate for premium and basic clients with priority scheduling
Comparison of premium and basic client performance in two cases
• The premium request rate is fixed
• The premium request rate is identical to the basic clients'
In both cases the quality of service for premium clients is better than for basic clients
Setup: four clients, one server, a 100Base-T network; the httperf application is run by the four clients
Summary
Contributions
• Motivates the need for server QoS to support tiered user service levels
• Protects servers from client demand overload
• Develops the WebQoS architecture
• Shows the benefits of the architecture through experiments
Unsolved problems
• Tighter integration of server and network QoS, and the ability to communicate QoS attributes across the network
• More flexible admission-control mechanisms
• Lightweight signalling mechanisms for high-priority traffic
• What benefits can be obtained from end-to-end QoS
Critique
Strong points
• Shows that the web bottleneck is on the server side, not in the network
• Presents an architecture that performs differentiated service and verifies the architecture's feasibility
• Can be combined with other differentiated-service approaches through the presented architecture
Weak points
• The architecture presented in this paper depends heavily on the state of the connection manager, which may itself become a bottleneck
• The experiment does not resemble a real environment closely enough to sufficiently demonstrate the effectiveness of the presented architecture