Microsoft SQL Server 2000 Scalability Project – Basic Capacity Scalability

Author: Man Xiong
Microsoft Corporation
July 2001

Summary: Microsoft® SQL Server™ 2000 running on Microsoft Windows® 2000 servers handles the largest mission-critical applications. This white paper is a joint effort by Microsoft and Dell™ to demonstrate the scalability of SQL Server 2000 and Dell hardware. SQL Server 2000 running on a Dell enterprise eight-way server can support multiple-terabyte databases and handle heavy workloads and administrative tasks. SQL Server 2000 maximizes your return on investment in symmetric multiprocessing (SMP) systems, allowing users to add processors, memory, and disks to build large, centrally managed enterprise servers.

The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.

This white paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, AS TO THE INFORMATION IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred.

© 2001 Microsoft Corporation. All rights reserved.

Microsoft, Windows, and Win32 are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

The names of actual companies and products mentioned herein may be the trademarks of their respective owners.

Table of Contents

Introduction
    SQL Server Scalability
    Hardware Scalability
Computer Configuration
1-TB OLTP Database
    OLTP Workload
    Index Reorganization
2-TB and 4-TB Decision Support Systems
    CREATE DATABASE Statement
    Bulk Data Loading
    Index Creation
    DSS Workload
    Database Verification
Conclusion
For More Information
Appendix A: A Dell Solution for Scalable Enterprise Computing
Appendix B: Test Configuration for the 1-TB OLTP Database
Appendix C: Test Configuration for the 2-TB DSS Database
Appendix D: Test Configuration for the 4-TB DSS Database

Introduction

Many successful companies are upsizing their online applications as their businesses expand with the growth of e-commerce. Now that every Internet and intranet user is a potential client, database applications face enormous user and transaction loads. As a result, many companies are building systems that can support millions of customers and users. Database servers are at the core of these systems, often managing multiple terabytes (TB) of information.

Scalable systems solve the upsizing problem by giving the designer a way to expand the network, servers, database, and applications by simply adding more hardware. Scalable computer systems can increase an application's client base, database size, and throughput without application reprogramming.

Well-known in the industry for its ease-of-use and self-tuning features, Microsoft® SQL Server™ 2000 manages large-scale servers as easily as smaller systems on a per-user basis. The tests described in this paper demonstrate the performance and scalability of SQL Server 2000 on very large databases (VLDB) without special tuning or unrealistic benchmark hardware configurations.

SQL Server Scalability

Microsoft SQL Server 2000 can both scale up on symmetric multiprocessing (SMP) systems and scale out on multiple-node clusters. This paper focuses on the scale-up scenario, demonstrating the scalability of a single SQL Server database running on a large SMP system, the Dell™ PowerEdge 8450.

Tests were conducted using three scenarios:

- 1-TB online transaction processing (OLTP) database
- 2-TB decision support system (DSS) database
- 4-TB DSS database

The results demonstrate that SQL Server 2000 can manage very large (1 to 4 TB) OLTP and DSS databases by:

- Maximizing the utilization of additional hardware resources (processors and memory) to provide predictable performance as the workload on the system increases.
- Minimizing performance degradation while performing online management in the background.

Hardware Scalability

The hardware environment is one of the key considerations in achieving performance in scalable database environments. In these tests, a static hardware environment was maintained: an 8-processor Dell™ PowerEdge 8450 system with 5 TB of storage on 160 physical disks, running Microsoft Windows® 2000 Datacenter Server. For OLTP-centric solutions, rigorous testing and participation in industry-standard benchmarks have shown that the following hardware considerations can have an important effect on the performance of the application:

Processors and memory

Database applications can be very processor and memory intensive depending on the type of database, the focus of the application, and usage patterns. To achieve optimal performance from the processor, testing has shown that not only is the speed of the processor important, but in scalable database environments, the size of the level 2 (L2) cache is equally important. If the application is configured to maximize the available processing power, then the more L2 cache available on the processor, the better the performance of the application. Additionally, if testing shows that the bottleneck in the application is the available amount of memory, adding memory to the system can improve performance.

Disk subsystem

As database applications are scaled, I/O is a common bottleneck. In applications in which access to the data is more random than sequential (as in OLTP), increasing the number of disk drives can mitigate an I/O bottleneck. This allows the application to spread the I/O load across multiple paths and shifts the performance load to the system processor rather than the disk subsystem. If data access is more sequential than random (as in DSS), additional drives will not provide as much performance benefit.

Networking

One of the key environmental considerations affecting performance is the network bandwidth available to the application. High-performance network protocols (such as the Virtual Interface Architecture–based SAN support included in SQL Server 2000) have been shown to improve performance by reducing networking bottlenecks in scalable database solutions.

For more information about the Dell products used in these tests, see Appendix A.

Computer Configuration

The tests were conducted using the following environment:

Server
- 1 Dell PowerEdge 8450
- 8 Intel® Pentium® III Xeon™ 700-MHz processors
- 32 gigabytes (GB) of RAM
- 8 QLogic QLA2200 PCI host bus adapters
- 4 Dell PowerVault™ 650F enclosures, each with ten 18-GB, 10,000-RPM disks and a 512-KB write-read cache
- 12 Dell PowerVault 630F enclosures, each with ten 18-GB, 10,000-RPM disks
- Total disk space: 5 TB (160 18-GB, 10,000-RPM disks)

Operating system
- Microsoft Windows 2000 Datacenter Server, build 5.00.2195, Service Pack 1

Database server
- Microsoft SQL Server 2000 Enterprise Edition, build 2000.80.194.0

1-TB OLTP Database

In this test, a representative OLTP database is generated and a transaction workload is simulated by software. The test application models a wholesale supplier managing orders and inventory. Transactions are provided to submit an order, process a payment, record deliveries, and check order status and stock levels. Although the underlying components and activities of this application are specific to this test, they are intended to be representative of typical OLTP systems.

This test demonstrates that SQL Server 2000:

- Achieves excellent performance running a large OLTP workload.
- Performs online management work with minimal degradation to the transactional workload.

For information about the SQL Server database used for this test, see Appendix B.

OLTP Workload

One of the most common techniques for improving transaction throughput is to add more CPUs and memory to the server. Adding more CPUs can help scale the system to handle more users or additional workload. OLTP workloads are often very I/O intensive because simple transactions require random disk reads and writes. These workloads benefit from increased memory because a larger SQL Server buffer cache greatly reduces data and index page faults, thereby enhancing workload performance.

SQL Server 2000 on Windows 2000 Datacenter Server can manage up to 64 GB of memory by using Address Windowing Extensions (AWE). Standard 32-bit addresses can map a maximum of 4 GB of memory, so the address space of a 32-bit Windows 2000 process is limited to 4 GB. By default, 2 GB is reserved for the operating system, and 2 GB is made available to the application. By specifying the /3GB switch in the Windows 2000 Datacenter Boot.ini file, the operating system reserves only 1 GB of the address space, and the application can access up to 3 GB.

AWE is a set of extensions to the memory management functions of the Microsoft Win32® API that allows applications to address more memory than the 4 GB available through standard 32-bit addressing. AWE lets applications acquire physical memory as nonpaged memory and then dynamically map views of the nonpaged memory to the 32-bit address space. Although the 32-bit address space is limited to 4 GB, the nonpaged memory can be much larger. This enables memory-intensive applications that support AWE, such as SQL Server 2000 Enterprise Edition, to address more memory than can be supported in a 32-bit address space. On systems with 16 GB or more of physical memory, it is recommended that the maximum memory for SQL Server be set to at least 1 GB less than the total available physical memory, thereby leaving at least 1 GB for the operating system.
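
The paper does not reproduce the exact configuration commands, but a minimal Transact-SQL sketch of enabling AWE and capping SQL Server memory on a 32-GB server might look like the following; the specific value of 31744 MB (leaving roughly 1 GB for the operating system) is an assumption based on the guidance above, not a setting reported for the test.

    -- Enable advanced options so the AWE and memory settings are visible.
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    -- Allow SQL Server 2000 Enterprise Edition to use AWE memory.
    -- This option takes effect only after the SQL Server service is restarted.
    EXEC sp_configure 'awe enabled', 1
    RECONFIGURE
    GO
    -- Cap SQL Server at 31 GB (value in MB), leaving about 1 GB for Windows 2000.
    EXEC sp_configure 'max server memory', 31744
    RECONFIGURE
    GO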

Test Results

The OLTP workload is run on systems with the following CPU and memory configurations:

- 2 CPUs, 4 GB total physical memory
- 4 CPUs, 8 GB total physical memory
- 8 CPUs, 32 GB total physical memory

As shown in Figure 1, OLTP workload performance improves when the number of CPUs is increased along with the memory. The increase in transaction throughput is nearly linear, demonstrating an excellent return on hardware investment.

Figure 1

Additional tests with different combinations of CPUs and memory show that additional memory helps only when there are available CPU cycles, as shown in Figure 2. For this database size and OLTP workload, using the full processing power and memory of this eight-way server significantly increases transaction throughput.

Figure 2

Index Reorganization

Microsoft SQL Server 2000 includes online index reorganization, a new feature that provides an effective mechanism for maintaining optimal performance despite the index fragmentation common to OLTP databases. Because this feature can be used online, it supports today's requirement for availability 24 hours a day, seven days a week.

Test Results

DBCC INDEXDEFRAG with its default parameters is run concurrently with the OLTP workload. The workload is first run normally at 50 percent CPU utilization to establish a baseline for transaction throughput. Next, an online reorganization is performed while the workload is running. In this test, the online reorganization causes transaction throughput to degrade by only 1 percent.

This test demonstrates that SQL Server 2000 can perform online reorganization with minimal degradation of transaction throughput. Results vary with the hardware configuration and the characteristics of the OLTP workload.
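
For reference, a defragmentation pass of this kind is issued with a single statement. In the sketch below, the database, table, and index names are placeholders; the paper does not identify the actual objects used in the test.

    -- Reorganize the leaf level of one index online while the OLTP workload runs.
    -- OLTPDB, Orders, and PK_Orders are hypothetical names for illustration.
    DBCC INDEXDEFRAG (OLTPDB, Orders, PK_Orders)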

2-TB and 4-TB Decision Support Systems

In this test, a representative DSS database-generation and workload software kit is used to simulate the data analysis requirements of a decision-support system processing business-oriented ad hoc queries. It models a decision-support system that loads or refreshes data from an OLTP database and produces computed and refined data to support sound business decisions. Queries are provided to assist decision makers in several domains of business analysis: pricing and promotions, supply and demand management, profit and revenue management, customer satisfaction, market share, and shipping management. The underlying components and activities of this application are intended to be representative of a typical DSS system.

This test demonstrates that SQL Server 2000:

- Achieves excellent performance for a standard DSS workload.
- Performs online management operations with minimal workload degradation.
- Scales predictably with increasing database size.

For information about the database used for the 2-TB DSS test, see Appendix C. For information about the database used for the 4-TB DSS test, see Appendix D.

CREATE DATABASE Statement

The CREATE DATABASE statement is used to initialize the 2-TB and 4-TB databases and to allocate the space for the databases.
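
The actual statement used in the tests is not listed in this paper, but the following sketch illustrates the general form of a CREATE DATABASE statement that pre-allocates data files across several logical drives plus a separate log file; the database name, file names, drive letters, and sizes are illustrative assumptions, not the test values.

    -- Pre-allocate multiple data files on separate drives and a dedicated log drive.
    CREATE DATABASE DSSDB
    ON PRIMARY
        (NAME = dssdb_data1, FILENAME = 'F:\dssdb_data1.mdf', SIZE = 204800MB),
        (NAME = dssdb_data2, FILENAME = 'G:\dssdb_data2.ndf', SIZE = 204800MB),
        (NAME = dssdb_data3, FILENAME = 'H:\dssdb_data3.ndf', SIZE = 204800MB)
    LOG ON
        (NAME = dssdb_log, FILENAME = 'P:\dssdb_log.ldf', SIZE = 61440MB)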

Test Results

Scalability with Database Size

The test results demonstrate scalability in relation to the size of the database: the database loading time scales linearly when going from a 2-TB database to a 4-TB database. (The average throughput is 730 GB per hour for the 2-TB DSS database and 722 GB per hour for the 4-TB DSS database.)

Bulk Data Loading

Microsoft SQL Server 2000 supports bulk database loading from multiple data streams in parallel. This technology makes optimal use of multiple CPUs and the available I/O bandwidth.

Test Results

Effect of Multiple Streams

The BULK INSERT statement in SQL Server 2000 is used to load data into the 2-TB and 4-TB DSS databases. As shown in Figure 3, data-loading throughput increases predictably with the number of input streams.

Figure 3
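
As a sketch of how one of the parallel load streams might be expressed, the BULK INSERT below loads a single delimited data file into one table; the table name, file path, and terminators are placeholder assumptions. The TABLOCK hint requests a bulk update lock so that several streams can load the same table concurrently.

    -- One of N parallel load streams; each stream reads a different input file.
    BULK INSERT Lineitem
    FROM 'F:\load\lineitem_01.tbl'
    WITH (
        FIELDTERMINATOR = '|',
        ROWTERMINATOR = '\n',
        TABLOCK
    )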

Effect of Increased Data Volume

When going from a 2-TB database to a 4-TB database, the database loading time scales linearly.

Index Creation

Decision-support systems usually require complex indexes to speed up query processing. In this test, indexes are built after the data is loaded.

SQL Server 2000 uses two types of indexes:

- Clustered indexes
- Nonclustered indexes

A clustered index determines the physical order of the data in a table. In a nonclustered index, the data is stored in one place and the index in another, with pointers to the storage location of the data. The items in the index are stored in the order of the index key values, but the information in the table is stored in a different order (which can be dictated by a clustered index).
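
The statements below show the two index types in their simplest form, following the generic "column A" and "column B" wording of the test description; the table and index names are placeholders rather than the actual test schema.

    -- Clustered index: orders the table data itself by column A.
    CREATE CLUSTERED INDEX IX_BigTable_A ON BigTable (A)

    -- Nonclustered index: a separate structure on column B with pointers to the data.
    CREATE NONCLUSTERED INDEX IX_BigTable_B ON BigTable (B)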

Test Results

Clustered vs. Nonclustered Index Creation Times

Index creation time is tested by building a clustered index on column A of a 0.65-TB table and a nonclustered index on the same column. Creating the clustered index on column A takes 25 percent more time than creating the nonclustered index on the same column, primarily because of the additional time required to move the data into key order.

Effect of Multiple CPUs on Index Creation

A clustered index is built on column A of a 0.65-TB table and a nonclustered index on column B of the same table. All columns are of the int data type. For testing purposes, the database with unsorted data is backed up once before any of these indexes are built, and this backup is restored before each index build. As shown in Figure 4, increasing the number of CPUs increases index creation throughput for all indexes. Again, throughput increases predictably with additional CPUs.

Figure 4

Effect of Table Size on Index Creation Time

Index creation time scales linearly when going from a 0.65-TB table to a 1.30-TB table, demonstrating that SQL Server 2000 index creation scales predictably.

Disk Space Required by Index Creation

Creating a clustered index requires extra disk space of approximately 1.2 times the data volume, beyond the size of the existing table data. When a clustered index is created, the table is copied, the data in the table is sorted, and then the original table is deleted. Therefore, enough empty space must exist in the database to hold a copy of the data, the B-tree structure, and some space for sorting. In this test, 0.85 TB of extra disk space is needed in the data files to create a clustered index on a 0.7-TB table.

The index creation is performed with the recovery model of the database set to BULK_LOGGED. In this mode, the system requires enough log space to record the allocation of storage for the new clustered index and the drop of the original table. In this configuration, 50 GB of log space is adequate to create a clustered index on a 0.65-TB table.
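
A minimal sketch of switching the recovery model around a large index build follows; the database name is a placeholder, and the surrounding steps (backup, index build) are implied rather than shown.

    -- Use minimal logging for the index build, then return to full recovery.
    ALTER DATABASE DSSDB SET RECOVERY BULK_LOGGED
    -- ... CREATE CLUSTERED INDEX runs here ...
    ALTER DATABASE DSSDB SET RECOVERY FULL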

DSS Workload

This ad hoc DSS workload simulates an environment in which users connect to the system to submit queries that are ad hoc in nature. It is composed of complex, read-only queries, as well as inserts and deletes that simulate a refresh of the database, perhaps from a source OLTP system. To simulate a real-world DSS environment, the queries are submitted by multiple, concurrent user sessions. This requires SQL Server to balance system resource allocation among the simultaneously running OLTP and DSS queries, which is done without any human intervention.

Test Results

Effect of the Number of CPUs

A massive DSS workload is run on the 2-TB DSS database. In this test, eight osql connections are run, each executing a mix of more than 20 representative queries, plus insert and delete activity. Figure 5 shows relative query performance as a function of the number of CPUs; the results for 2 and 4 processors are given relative to the 8-processor result. This illustrates nearly perfect linear scaling of query performance with the number of CPUs using SQL Server 2000.

Figure 5
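
Each query stream in such a test is typically driven by an osql command line similar to the hypothetical example below; the server, database, and script names are assumptions, since the paper does not list the exact invocation.

    REM One of eight concurrent query streams, each started from its own command window.
    osql -E -S PE8450 -d DSSDB -i stream01.sql -o stream01.out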

Disk I/O Throughput

DSS queries often require range scans of tables and indexes involving sequential disk reads. SQL Server 2000 automatically uses the maximum available I/O capacity. Figure 6 shows the increase in disk-read throughput as a function of the number of CPUs, illustrating that SQL Server uses processor power effectively to drive query processing and related I/O activity. Again, performance increases predictably as more CPUs are added.

Figure 6

tempdb Space Required

All DSS testing is done using a 300-GB tempdb database. This space is preallocated to prevent variations in the testing due to spontaneous automatic growth of tempdb.
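
A sketch of pre-sizing tempdb to a fixed allocation is shown below. The logical file names tempdev and templog are the SQL Server 2000 defaults; the sizes and the single-file layout are simplifying assumptions (the test actually spreads tempdb across several drives, as shown in Appendices C and D).

    -- Grow the tempdb data and log files up front so autogrow does not occur mid-test.
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 307200MB)
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 10240MB)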

Database Verification

DBCC CHECKDB performs an online analysis of the integrity of all the storage structures of a database. Online verification, like other maintenance operations in SQL Server, is designed to have minimal effect on the production workload.

By default, DBCC CHECKDB performs a detailed check of all data structures. This checking is done in parallel to make effective use of large systems with many processors. Usually, DBCC CHECKDB should be used with the PHYSICAL_ONLY option to check the physical structure of the page and record headers. This operation performs a quick check designed to detect hardware-induced errors.

Test Results

Effect of the Size of the Database

Execution time of DBCC CHECKDB scales linearly when going from the 2-TB DSS database to the 4-TB DSS database, demonstrating predictable scaling of verification time with database size.

Minimal Workload Performance Degradation

While DBCC CHECKDB is run with the PHYSICAL_ONLY option during a heavy concurrent DSS workload, only 3 percent performance degradation is measured on the workload.

Estimating Temporary Space Required by Database Verification

DBCC CHECKDB requires additional space in tempdb. DBCC CHECKDB WITH ESTIMATEONLY is run before DBCC CHECKDB to determine the tempdb space that is needed; sufficient physical disk space is then allocated to tempdb. DBCC CHECKDB WITH ESTIMATEONLY on the 2-TB DSS database takes four minutes and reports that 38.5 GB of space in tempdb is needed, very close to the actual space used (37 GB). The same command on the 4-TB DSS database takes eight minutes and reports that 81 GB of space in tempdb is needed, again very close to the actual space used (83 GB). This testing illustrates that the estimation process is a fast and accurate way to predict temporary space requirements for database verification. In addition, the test shows that the time required to obtain the estimate is predictable based on database size.
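
The two commands involved are shown below against a placeholder database name: the first estimates the tempdb space the full check would need, and the second performs the lighter physical-structure check used during the concurrent-workload measurement.

    -- Report the tempdb space a full DBCC CHECKDB would need, without running the check.
    DBCC CHECKDB ('DSSDB') WITH ESTIMATEONLY

    -- Check page and record headers only; a fast pass aimed at hardware-induced errors.
    DBCC CHECKDB ('DSSDB') WITH PHYSICAL_ONLY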

Conclusion

The combination of Microsoft SQL Server 2000, Microsoft Windows 2000 Datacenter Server, and the Dell PowerEdge 8450 server scales predictably as database size increases, allowing you to deploy mission-critical VLDB systems with confidence. Excellent performance is achieved at database sizes of 1, 2, and 4 TB. Heavy OLTP and DSS workloads are handled by the server with no manual tuning of SQL Server parameters, and normal management tasks are completed in the background during heavy workload tests with minimal degradation to production work.

For More Information

For more information about SQL Server scalability and hardware scalability, see these resources:

- SQL Server Home Page at http://www.microsoft.com/sql
- SQL Server 2000 Benchmark Page at http://www.microsoft.com/sql/worldrecord
- Dell Server Solutions at http://www.dell.com/us/en/biz/products/line_servers.htm
- Dell Storage Solutions at http://www.dell.com/us/en/biz/products/line_storage.htm
- SQL Server on Dell at http://www.dell.com/SQL
- Windows 2000 on Dell at http://www.dell.com/Windows2000
- Dell High Availability Solutions at http://www.dell.com/HA

Appendix A: A Dell Solution for Scalable Enterprise Computing

The Dell PowerEdge 8450 is an ideal platform for Scalable Enterprise Computing (SEC) environments because it can deliver high levels of scalable performance running solutions on Microsoft Windows 2000 Datacenter Server and Windows 2000 Advanced Server. The PowerEdge 8450 is designed to support enterprise applications and consolidate server resources in datacenter environments.

PowerEdge 8450

The PowerEdge 8450 offers:

- Up to 8 Intel Pentium III Xeon processors at 700 MHz and 900 MHz (1-MB and 2-MB cache available) for ultimate scalability.
- Hot-swap drives, power supplies, cooling fans, and PCI slots that help improve reliability and performance.
- Four peer PCI buses and 10 64-bit PCI slots that provide outstanding I/O bandwidth.
- 30-day Getting Started Technical Support and 24x7 telephone and E-Support to help ensure a successful transition.
- Dell OpenManage server management software for ease of use.

PowerVault 650F

The PowerVault 650F offers a highly available, highly scalable Fibre Channel RAID storage system with:

- Dual-active, redundant controllers for reliable RAID protection and exceptional performance.
- A fully redundant Fibre Channel architecture, which provides for no single point of failure.
- Support for up to 10 Fibre Channel drives internally.

PowerVault 630F

The PowerVault 630F is an expansion enclosure for the PowerVault 650F, offering:

- Redundant power supplies, fans, and link control cards for additional protection.
- 10 drives per enclosure.
- Up to 11 expansion enclosures per array.

Appendix B: Test Configuration for the 1-TB OLTP Database

This appendix provides the configuration environment used for the tests run against the 1-TB OLTP database.

Disk Configuration

The following table shows the disk configuration used for this test. Both the disk write cache and the disk read cache are set to 249 MB.

Logical drive   Physical disks   RAID configuration   Capacity (GB)   Comment
F               14               1/0                  320             OLTP DB data file and backup space
G               12               1/0                  320             OLTP DB log file
H               12               1/0                  320             OLTP DB data file and backup space
I               14               1/0                  320             OLTP DB data file and backup space
J               12               1/0                  320             OLTP DB data file and backup space
K               12               1/0                  320             OLTP DB data file and backup space
L               14               1/0                  352             OLTP DB data file and backup space
M               12               1/0                  352             OLTP DB data file and backup space
N               12               1/0                  352             OLTP DB data file and backup space
O               14               1/0                  320             OLTP DB data file and backup space
P               12               1/0                  64              OLTP DB data file and backup space
Q               12               1/0                  320             OLTP DB data file and backup space

Database Space Configuration

- Data size: 1,046 GB
- Index size: 244 GB
- Space allocated for data and indexes: 1,500 GB
- Log space: 60 GB
- Total disk space used (backup excluded): 1,560 GB

Database Server Settings

- The lightweight pooling server option was used for the test.
- The affinity mask option was used to allocate the appropriate number of processors to SQL Server during the CPU tests.
- Address Windowing Extensions (AWE) was enabled so that SQL Server could use more memory during the memory tests.
- Max server memory was set to leave 1 GB of memory to the operating system when AWE was enabled.
- All other SQL Server configuration parameters were left at their default values.
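
For illustration, the corresponding sp_configure calls might look like the sketch below. The affinity mask value 15 (binding SQL Server to CPUs 0 through 3) is an example chosen for a 4-CPU run, not a value reported in the paper, and both options require a restart of the SQL Server service to take effect; the AWE and memory settings are sketched earlier in the OLTP Workload section.

    -- Enable advanced options, then turn on lightweight pooling and set CPU affinity.
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    EXEC sp_configure 'lightweight pooling', 1
    EXEC sp_configure 'affinity mask', 15    -- example: use CPUs 0-3 for a 4-CPU test
    RECONFIGURE
    GO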

Appendix C: Test Configuration for the 2-TB DSS Database

This appendix provides the configuration environment used for the tests run against the 2-TB DSS database.

Disk Configuration

The following table shows the disk configuration used for this test. Both the disk write cache and the disk read cache are set to 249 MB.

Logical drive   Physical disks   RAID configuration   Capacity (GB)   Comment
F               10               0                    320             DSS DB data file and backup space
G               10               0                    320             DSS DB data file and backup space
H               10               0                    320             DSS DB data file and backup space
I               10               0                    320             DSS DB data file and backup space
J               10               0                    320             DSS DB data file and backup space
K               10               0                    320             DSS DB data file and backup space
L               10               0                    352             DSS DB data file and backup space
M               12               0                    352             tempdb
N               12               0                    352             tempdb
O               10               0                    320             tempdb
P               4                1/0                  64              Logs of DSS DB and tempdb
Q               10               0                    320             DSS DB data file and backup space
R               10               0                    320             DSS DB data file and backup space
S               10               0                    320             DSS DB data file and backup space
T               10               0                    320             DSS DB data file and backup space
U               10               0                    320             DSS DB data file and backup space

Database Space Configuration

- Data size: 1,096 GB
- Index size: 566 GB
- Space allocated for data and indexes: 2,680 GB (extra space needed for clustered index creation and sorting)
- Log space: 60 GB
- tempdb space: 360 GB

Database Server Settings

- The affinity mask option was used to allocate the appropriate number of processors to SQL Server during the CPU tests.
- All other SQL Server configuration parameters were left at their default values.

Appendix D: Test Configuration for the 4-TB DSS Database

This appendix provides the configuration environment used for the tests run against the 4-TB DSS database.

Disk Configuration

The following table shows the disk configuration used for this test. The disk read cache and write cache are set to 249 MB, except where noted in the results for this test described earlier in this paper.

Logical drive   Physical disks   RAID configuration   Capacity (GB)   Comment
F               10               0                    320             DSS DB data file and backup space
G               10               0                    320             DSS DB data file and backup space
H               10               0                    320             DSS DB data file and backup space
I               10               0                    320             DSS DB data file and backup space
J               10               0                    320             DSS DB data file and backup space
K               10               0                    320             DSS DB data file and backup space
L               10               0                    352             DSS DB data file and backup space
M               12               0                    352             tempdb
N               12               0                    352             tempdb
O               10               0                    320             tempdb
P               6                1                    192             Logs of DSS DB and tempdb
Q               10               0                    320             DSS DB data file and backup space
R               10               0                    320             DSS DB data file and backup space
S               10               0                    320             DSS DB data file and backup space
T               10               0                    320             DSS DB data file and backup space
U               10               0                    320             DSS DB data file and backup space

Database Space Configuration

- Data size: 2,356 GB
- Index size: 1,359 GB
- Space allocated for data and indexes: 4,096 GB
- Log space: 100 GB
- tempdb space: 800 GB

Database Server Settings

- The affinity mask option was used to allocate the appropriate number of processors to SQL Server during the CPU tests.
- All other SQL Server configuration parameters were left at their default values.