storageconfiguration_2009
Post on 14-Apr-2018
-
7/29/2019 StorageConfiguration_2009
Storage Performance on SQLServer
Joe Chang
-
Coverage
Emphasis is on Line of Business DB
Different priorities for less critical apps
Performance
Fault-tolerance covered elsewhere
-
Overview
IO Performance Objectives
The Complete Storage Environment
Direct-Attach and SAN
Storage Components
Disk Performance
SQL Server IO Characteristics
Configuration Examples
SSD
-
Old Rules
Meet transaction throughput
Disk Performance Criteria
Read from Data, Write to Logs
Separate Data and Log files?
Disk Queue Depth < 2 per disk
Prevalent use of SAN: LUNs with unknown number of disks
Latency (Avg Disk Sec/Read)
-
Storage Performance Criteria
SELECT (Read) Query
Data must be read into buffer cache if not already in cache (read from data files)
INSERT/UPDATE/DELETE (Write) Query
Data must be read into buffer cache
Transaction must be written to log
Buffer is marked as dirty, lazy writer handles
Large Query (as necessary)
Write and Read to tempdb
-
Flashback: 1994 - 2009
1994: Pentium 100MHz
64MB, 4 x 16MB SIMM ($700+ each?)
OS + DB executable ~ 16-24MB
Net: 40MB Buffer cache
Difficult to support transactions
Reports run on 1st of month
Today: 4 x Quad Core
128GB, 32 x 4GB, $4800
3000 X increase in buffer cache
-
Requirements Then and Now
Old: Support transactions
No longer really an issue for most environments (after proper SQL tuning!)
Today: Minimize disruptions to transactions
Large query or table scan while supporting transactions
Checkpoint: write dirty buffers to data files
Transaction log backup
Backup & Restore
-
Cost versus Value/Requirements
Money is no object:
With a sufficient number of disks, IO channels, and proper configuration
It is possible to avoid most disruptions
Otherwise: manage IO disruptions
Establish tolerable disruptions: 5-30 seconds?
Large reports run off-hours
Configure sufficient performance to handle transient events
-
Complete Storage Environment: Direct Attach and SAN
-
Most Common Mistakes
Storage sized to capacity requirements
2 HBA (or RAID Controllers)
Too few big capacity disk drives
Fill system PCI-E slots with controllers
Many small 15K drives (146GB 3.5in or 73GB 2.5in)
-
Direct Attach
System IO capability is distributed across multiple PCI-E slots.
A single controller does not have sufficient IO; a single (or even dual) SAS/FC port does not have sufficient IO
Distribute IO over multiple PCI-E channels, controllers (SAS or FC), and dual-port SAS or FC
Disk Array Enclosures (DAE): do not daisy chain (shared SAS/FC) until all channels are filled!
[Diagram: server system with four CPUs and two IO hubs; each hub's PCI-E slots hold SAS HBAs, each fanning out over x4 SAS ports to disk enclosures]
-
SAN
A SAN is really a computer system (or systems), typically connected by FC to host and storage
Can be fault-tolerant in all components and paths: HBA, cables, switches, SP, disks
No special performance enhancements; slight degradation (excessive layers)
Write cache is mirrored between SPs
Really important: distribute load over all front-end and back-end FC ports
[Diagram: server system with four CPUs, two IO hubs, and multiple dual-port FC HBAs, connected over FC to SAN service processors SP A and SP B, each with front-end and back-end FC ports]
-
Direct Attach & SAN
Direct Attach
RAID Controller in Server
Fault-tolerant disks,
sometimes controller/path, 2-node clusters
SAN
Host Bus Adapter, (switches)
Service Processor
Full component and path fault tolerance
Multi-node clusters
-
SAN Vendor View
One immensely powerful SAN serving the storage needs of all servers
Storage consolidation: centralize management and minimize unused space
Problem is: the SAN is not immensely powerful
What happens if the LUN for another server fails, and a restore from backup is initiated during busy hours?
[Diagram: a single SAN behind one switch serving DW-BI DB, Email, Web QA DB, OLTP DB, and SharePoint servers]
-
Proper View
Nothing should disrupt the operation of a line-of-business server
Data warehouse should not be mixed with transaction processing DB
Consider multiple storage systems for very large IOPS loads instead of a single SAN
[Diagram: OLTP and DW/BI servers each on dedicated storage; Email, SharePoint, and file server sharing a separate SAN]
-
Storage Systems
SAN Entry: HP MSA 2000, (Dell MD 3000)
SAN Mid-range: EMC CLARiiON, HP EVA, NetApp FAS3100
SAN Enterprise: EMC DMX, Hitachi, 3PAR, FAS6000
Direct Attach High Density: HP MSA 50, 70, Dell MD 1120
Direct Attach: HP MSA 60, Dell MD 1000
-
EMC CLARiiON
-
[Diagram: CX4 architecture with two CPU modules (multi-core processors, memory), each with an IO complex of iSCSI and Fibre Channel modules, linked by x8 CMI; LCCs, SPSs, and power supplies]
High-performance Flash drives
Spin Down: low-power SATA II drives
Adaptive Cooling
Virtual Provisioning
= capacity optimization, energy efficiency
Multi-core processors
Increased memory
64-bit FLARE
Up to 960 drives
= up to twice the performance, scale
-
EMC DMX
-
Cache
If system memory is 128GB
What do you expect to find in a 16GB SAN cache
that is not in the buffer cache?
Performance benchmarks
Most use direct attach storage
With SAN: cache disabled
Alternative: tiny read cache, almost all to write
-
Complete Environment Summary
Server System
Memory Bandwidth
IO bandwidth, port, PCI-E slots
Pipes/channels from Server to Storage
Storage System
RAID controller, etc
Pipes to disk drives
Disk drives
If system memory is 128GB, what do you expect to find in the 16GB SAN cache that is not in the buffer cache?
-
Storage Components
-
Storage Components/Interfaces
System IO
Disk Drives
HBA and RAID Controller: SAS (3Gbit/s going to 6), FC (4Gbit/s to 8)
Storage Enclosures (DAE)
Disk Drives
SAN Systems
SAN Switches
-
Server Systems: PCI-E Gen 1
PCI-E Gen 1: 2.5Gbit/s per lane, bi-directional
Dell PowerEdge 2950: 2 x8, 1 x4
Dell PowerEdge R900: 4 x8, 3 x4 (shared)
HP ProLiant DL385G5p: 2 x8, 2 x4
HP ProLiant DL585G5: 3 x8, 4 x4
HP ProLiant DL785G5: 3 x16, 3 x8, 5 x4
Most PCI-E slots have dedicated bandwidth; some may be shared bandwidth (with expander chip)
-
Server Systems: PCI-E Gen 2
PCI-E Gen 2: 5.0Gbit/s per lane
x4: 2 GB/sec in each direction
Dell PowerEdge R710: 2 x8, 2 x4
Dell PowerEdge R910(?)
HP ProLiant DL370G6: 2 x16, 2 x8, 6 x4
Intel 5520 chipset: 36 PCI-E Gen 2 lanes, 1 ESI (x4)
ProLiant ML/DL 370G6 has 2 5520 IOH devices
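The per-slot numbers above follow from the lane rates once encoding overhead is removed. A minimal sketch (the helper name is mine; both Gen 1 and Gen 2 use 8b/10b encoding):

```python
# Usable PCI-E bandwidth per direction. Gen 1 and Gen 2 use 8b/10b
# encoding, so only 8 of every 10 bits on the wire carry payload.
def pcie_gb_per_sec(lanes, gen):
    signaling_gbit = {1: 2.5, 2: 5.0}[gen]   # raw Gbit/s per lane
    data_gbit = signaling_gbit * 8 / 10      # strip 8b/10b overhead
    return lanes * data_gbit / 8             # GB/s per direction

print(pcie_gb_per_sec(4, 2))   # x4 Gen 2 -> 2.0 GB/s, as stated above
print(pcie_gb_per_sec(8, 1))   # x8 Gen 1 -> 2.0 GB/s
```

This is why an x8 Gen 1 slot and an x4 Gen 2 slot carry the same controller bandwidth.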
-
Disk Drives
Rotational Speed 7200, 10K, 15K
Average Rotational latency 4, 3, 2 milli-sec
Average Seek Time
8.5, 4.7, 3.4ms (7200, 10K, 15K RPM)
2.5 in 15K 2.9 ms avg. seek
Average Random Access Time
Rotational + Seek + Transfer + Overhead
Native Command Queuing
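The rotational latencies quoted above are simply half a revolution, since the target sector is on average half a turn away. A quick check (helper name is mine):

```python
# Average rotational latency = time for half a revolution.
def avg_rotational_latency_ms(rpm):
    return 0.5 * 60_000 / rpm   # 60,000 ms per minute

for rpm in (7200, 10_000, 15_000):
    print(rpm, round(avg_rotational_latency_ms(rpm), 2))
# -> 4.17, 3.0, 2.0 ms, matching the figures above
```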
-
Disk Interfaces
SATA mostly 7200RPM
SATA disk can be used in SAS system
SATA Adapter cannot connect to SAS disk
SAS 15K
3.5 in LFF, 2.5in SFF
Currently 3 Gbits/sec, next gen: 6 Gb/s
FC typically in SAN
4 Gbit/s, next: 8 Gbit/s
-
Disk Drives (3.5in, LFF)
Platter diameters: 95mm, 84mm, 65mm (7200, 10K, 15K RPM)
7200RPM, 1TB: Barracuda 12: 8.5ms, 125MB/s; Barracuda LP (5900RPM): 95MB/s
10,000RPM, 5ms: end of life?
15,000RPM, 3.4ms: 146, 300, 450GB, 167MB/sec
Lower RPM drives have higher bit density and larger platters, contributing to very low $/GB.
Desktop drives are rated for 2 years @ 20% duty cycle, server drives for 5 years @ 100%
-
Seagate Drives
Cheetah 3.5in LFF drives:
15K.4: 36/73/146GB, 3.5/4.0ms, 95? MB/sec
15K.5: 73/146/300GB, 3.5/4.0ms, 125-73MB/sec
15K.6: 146/300/450GB, 3.4/3.9ms, 171-112MB/sec
15K.7: 300/450/600GB
Savvio 2.5in SFF drives:
15K.1: 36/72GB, 2.9/3.3ms, 112-79MB/sec
15K.2: 73/146GB, 2.9/3.3ms, 160-120MB/s
(Also shown: Savvio 10K.3, Barracuda ES)
-
Dell PowerVault
Dell PowerVault MD 1000
15 3.5in
$7K for 15 x 146GB 15K drives
Dell PowerVault MD 1120
24 2.5in
$11K for 24 x 73GB 15K
-
HP MSA
MSA 60: 12 LFF drives
MSA 70: 25 SFF drives
-
Direct Attach Cluster Capable
Dell PowerVault MD 3000 15 3.5in
2 internal dual-port RAID controllers
$11.5K for 15 x 146G 15K drives
Listed as direct attach, but essentially an entry SAN
-
PCI-E SAS RAID Controllers
First Generation
PCI-E host interface
PCI-X SAS controller
PCI-E to PCI-X bridge
800MB/sec
Second Generation
Native PCI-E to SAS
1.6GB/sec in x8 PCI-E, 2 x4 SAS ports
-
FC HBA
QLogic QLE2562
Dual port 8Gb/s FC, x8 PCI-E Gen 2
QLogic QLE2462
Dual port 4Gb/s FC, x4 PCI-E Gen 1
QLogic QLE2464
Quad port FC, x8 PCI-E Gen 1
Emulex LPe12002
Emulex LPe11002/11004
-
Disk Performance
-
Random IO Theory: Queue Depth 1

Drive    Rotational latency  Avg seek  8KB transfer  Total (ms)  IOPS
7200     4.17                8.5       0.06          12.7        78.6
10K      3.0                 4.7       0.07          7.77        128.7
15K      2.0                 3.4       0.05          5.45        183.6
15K SFF  2.0                 2.9       0.05          4.95        202

IO rate based on data distributed over the entire disk, accessed at random, one IO command issued at a time
Not accounting for other delays
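The IOPS column is just the reciprocal of the total service time. A sketch of the model (function name is mine):

```python
# Queue-depth-1 random IO: each IO pays rotational latency + seek +
# transfer before the next IO can be issued, so IOPS = 1000 / total ms.
def qd1_iops(rotational_ms, seek_ms, transfer_ms):
    total_ms = rotational_ms + seek_ms + transfer_ms
    return 1000.0 / total_ms

print(round(qd1_iops(2.0, 3.4, 0.05), 1))    # 15K LFF: ~183.5 IOPS
print(round(qd1_iops(4.17, 8.5, 0.06), 1))   # 7200 RPM: ~78.6 IOPS
```

Small differences from the table come from rounding of the intermediate totals.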
-
Other Factors
Short Stroke: data is distributed over a fraction of the entire disk
Average seek time is lower (approaching the track-to-track minimum)
Command Queuing:
More than one IO issued at a time
Disk can reorder individual IO accesses, lowering access time per IO
-
8K Random IOPS vs Utilization
[Chart: IOPS (0 to 600) versus queue depth Q1-Q64, for space utilization from 88% down to 1.4%]
IOPS for range of queue depth and space utilization
-
Latency versus Queue Depth
[Chart: latency (0 to 180) versus queue depth Q1-Q64, for space utilization from 88% down to 1.4%]
Latency versus queue depth for range of space utilization
-
Disk Summary
Frequently cited rules for random IO
Applies to Queue Depth 1
Data spread across entire disk
Key Factor
Short-stroke
High-Queue Depth
SAN
A complex SAN may hide short-stroke and high-queue behavior
-
SQL Server IO Patterns
-
SQL Server IO
Transactional queries
Read/Write
Reporting / DW queries
Checkpoints
T-Log backups
Differential/Full backups
-
Transactional Query
Few rows involved
SELECT xx FROM Table WHERE Col1 = yy
Execution plan has bookmark lookup or loop joins
IO for data not in buffer cache
8KB, random
issued 1 at a time, serially (5ms min latency)
(up to around 24-26 rows)
Even if the LUN has many disks, IO queue depth is 1!
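Because the lookups are issued serially, total latency scales linearly with row count no matter how many spindles back the LUN. A toy model using the 5ms figure from this slide (function name is mine):

```python
# Serial key lookups: one outstanding IO at a time, so latencies add
# and extra disks behind the LUN cannot help.
def serial_lookup_ms(rows, per_io_ms=5.0):
    return rows * per_io_ms

print(serial_lookup_ms(26))    # 26 rows -> 130.0 ms of pure IO wait
```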
-
Large Query
Plan has bookmark lookup or loop join
Uses Scatter-Gather IO
More than (approximately) 30 rows
Depending on Standard or Enterprise Edition
Multiple IO issued with one call,
Generates high-queue depth
A query for 100 rows can run faster than one for 20!
High row count non-clustered index seek: are key lookups really random? Build the index with care; only highly selective SARGs in the key.
-
Tempdb
A large query may need to spool intermediate results to tempdb
Sequence of events is:
Read from data
Write to tempdb
Read from tempdb (sometimes)
Repeat
Disk load is not temporally uniform!
Data and tempdb should share a common pool of disks/LUNs
-
Checkpoint
Dirty data buffers written to disk
User does not wait on data write
SQL Server should throttle checkpoint writes
But a high queue depth of writes may result in high-latency reads
-
Log Backup
Disrupts sequential log writes
-
Update
Problem in SQL Server 2000
UPDATE uses non-clustered index
Plan does not factor in key lookups
Execution fetches one row at a time
~5-10ms per key lookup
-
Storage Configuration Examples
-
General Strategy Distribute IO
Distribute IO across multiple PCI-E slots
Distribute IO across multiple HBA/controllers
Distribute IO across many disk drives
Daisy chain DAE only after all channels are filled
High transaction (write) volume:
Dedicate HBA/controller, SAN SP, disk drives for logs?
-
LFF or SFF disks
LFF: 12-15 disks per enclosure
SFF: 24-25 disks per enclosure
15 disks on x4 SAS:
total bandwidth 800MB/s, 53MB/s per disk
24 disks on x4 SAS: 33MB/s per disk
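The per-disk figures above are just the shared port bandwidth divided by the disk count. Sketch (helper name is mine; assumes the ~800 MB/s usable per x4 SAS port stated here):

```python
# Sequential bandwidth available per disk when one x4 SAS port is
# shared by an entire enclosure.
def mb_per_disk(port_mb_per_sec, disks):
    return port_mb_per_sec / disks

print(round(mb_per_disk(800, 15)))   # LFF enclosure: ~53 MB/s per disk
print(round(mb_per_disk(800, 24)))   # SFF enclosure: ~33 MB/s per disk
```

For sequential-heavy work, fewer disks per port keeps each drive closer to its sustained transfer rate.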
-
Minimum for Line-of-Business
2 x Xeon 5500 or 5400 series, 64-72GB memory, 4 SAS RAID controllers: $11-13K
4 x 15-disk enclosures, 60 x 146GB 15K drives: $28K
6TB capacity (3+1 RAID 5), 600GB database
3GB/sec sequential, 30K IOPS short-stroke peak
SQL Server Ent license: $50K
12-15 disks per x4 SAS port, 800-1000MB/sec bandwidth, x4 or x8 PCI-E
SAN option: 2 dual-port FC HBA, EMC CLARiiON CX4-240, 4 DAE
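The headline numbers for this configuration can be sanity-checked from rough per-component figures. The per-disk short-stroke IOPS assumption below (~500 per 15K drive) is mine; the 800 MB/s per x4 SAS port figure appears elsewhere in the deck:

```python
# Rough sanity check of the minimum line-of-business configuration.
disks, sas_ports = 60, 4
iops_per_disk_short_stroke = 500   # assumed peak per 15K drive
mb_per_port = 800                  # usable MB/s per x4 SAS port

print(disks * iops_per_disk_short_stroke)   # ~30,000 IOPS peak
print(sas_ports * mb_per_port / 1000)       # ~3.2 GB/s aggregate sequential
```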
-
Intermediate
1 DAE per controller in x4 PCI-E slots
2 DAE per controller in x8 PCI-E slots: use both SAS ports, 1 DAE per x4 SAS
Daisy-chain DAE only for very high disk counts
4 x Xeon 7400 series, 128GB memory, 4 SAS RAID controllers: $25K
6 LFF (3.5in) disk enclosures, 90 x 73GB 15K drives: $42K
9TB capacity (3+1 RAID 5), 900GB database
3GB/sec+ sequential, 45K IOPS short-stroke peak
SQL Server Ent license: $100K
SAN example: CLARiiON CX4-480, 3 dual-port HBA, 6 DAE
-
SFF disks for Heavy Random IO
4 x Xeon 7400 series, 128GB memory, 4 SAS RAID controllers: $25K
6 SFF (2.5in) disk enclosures, 144 x 73GB 15K drives: $66K
7TB capacity (3+1 RAID 5), 700GB database
3GB/sec+ sequential, 70K IOPS short-stroke peak
SQL Server Ent license: $100K
-
Really Serious DW
8 x Opteron 8400 series, 256GB memory, 8 SAS RAID controllers: $80K (or Unisys, NEC, IBM)
14 SFF (2.5in) disk enclosures, 336 x 73GB 15K drives: $154K
16TB capacity (3+1 RAID 5), 7-9GB/sec+ sequential
1.6TB database: 160K IOPS peak; 3.2TB: 130K IOPS peak
SQL Server Ent license: $200K
Needs lots of IO bandwidth and slots, more than a 4-way Xeon 7400 series with the 7300 chipset can handle
-
SAN CLARiiON example
Minimum (disks)
CX4-240, 2 dual-port FC HBA, 4 DAE
Intermediate (120 disks)
CX4-480, 4 dual-port FC HBA, 8 DAE
High-bandwidth DW (240 disks)
CX4-960, 2 quad, 4 dual-port FC HBA, 16 DAE
Very high random IO (480 disks)
CX4-960, 2 quad, 4 dual-port HBA, 32 DAE
-
Storage Performance Verification
What To Test
-
What To Test
Sequential
Random low queue, high queue
High row count Update with nonclusteredindex
Checkpoint writes
Full-stroke and Short-stroke
Cache Settings
-
Cache Settings
Read
Read-Ahead, Adaptive Read-Ahead, None
Write
Write Back, Write Through
Read: none or very small (2MB/LUN)
Write: Write-Back
-
SAN - HBA Settings
NumberOfRequests
Default 32? Prevents multiple hosts from overloading the SAN
Match to the number of disks to control queue depth?
MaxSGList
-
SSD
-
SSD Types
DRAM
fastest, most expensive
NVRAM (flash)
SLC: more expensive per GB, higher write performance
MLC: low cost per GB
Interfaces
SAS
PCI-E (Fusion-IO, 1GB/sec, 120K IOPS+)
Complete SAN (Texas Memory Systems)
-
SSD
Intel X-25E, 32 & 64GB
Sequential Read 250MB/s, Write 170MB/s
Random Read: 35,000 IOPS @ 4KB
Random Write: 3,300 IOPS @ 4KB
Good but not spectacular
Latency: 75 us Read, 85 us Write
Really helpful for serial Queue Depth 1 accesses
-
SQL Server IO Cost Structure
Key Lookup, Loop Join
4-5 micro-sec in-memory
15-25 us for 8K read from disk + eviction
45 us for 64K read due to cold cache
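Those per-lookup costs compound quickly in a nested-loops plan. A toy comparison using the figures on this slide (function name is mine):

```python
# Total time for N key lookups at a given per-lookup cost (microseconds).
def lookup_plan_us(rows, per_lookup_us):
    return rows * per_lookup_us

print(lookup_plan_us(100_000, 5))    # all in memory: 500,000 us (0.5 s)
print(lookup_plan_us(100_000, 25))   # 8K disk reads: 2,500,000 us (2.5 s)
```

A 5x cost multiplier per row is why a plan that looks fine against a warm buffer cache can be slow from cold storage.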
-
SSD and RAID
Does an SSD need to be in RAID?
A disk drive is fundamentally a single device
Motor or media failure results in loss of the drive
An SSD is not required to be a single device
Composed of an SoC interfacing SAS to NVRAM
Dual SoC plus ECC with chip kill could make an SSD fault-tolerant
-
Additional Slides
Partition Alignment
-
Partition Alignment
http://blogs.msdn.com/jimmymay/default.aspx
Misaligned Theory
With a 64K stripe, warm cache, 8KB IO: on average every 8 random IO accesses generate 10 actual IOs, a 25% gain from alignment
With a 64K stripe, cold cache, 64KB IO: every disk access generates 2 IOs, a 100% gain
-
RAID Theory
Operation    RAID 0  RAID 1+0  RAID 5
Read         1       1         1
Small Write  1       1/2       1/4
Large Write  1       1/2       1 - 1/N

Theoretical performance per drive for N drives in a RAID group
RAID 5 write: 1 read data, 1 read parity, 1 write data, 1 write parity. Write penalty is reduced if the entire stripe can be written
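The table translates directly into an effective write-IOPS penalty per level. A sketch for the small-write case (names are mine):

```python
# Effective small random-write IOPS per RAID level: RAID 1+0 issues
# 2 physical writes per logical write; RAID 5 issues 4 physical IOs
# (read data, read parity, write data, write parity).
def effective_write_iops(raw_disk_iops, level):
    physical_ios = {"raid0": 1, "raid10": 2, "raid5": 4}[level]
    return raw_disk_iops / physical_ios

print(effective_write_iops(180, "raid10"))   # 90.0
print(effective_write_iops(180, "raid5"))    # 45.0
```

For write-heavy logs this is why RAID 1+0 is usually preferred over RAID 5.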
-
EMC CLARiiON
                     CX4-120         CX4-240         CX4-480         CX4-960
SP CPU               1x1.2GHz DC     1x1.6GHz DC     1x2.2GHz DC     2x2.3GHz QC
System memory        6GB             8GB             16GB            32GB
Memory per SP        3GB             4GB             8GB             16GB
Max cache            600MB           1.264GB         4.5GB           10.76GB
Max write cache      600MB           1.264GB         4.5GB           10.76GB
CMI                  x4              x4              x8
Front-end base       4 FC + 4 iSCSI  4 FC + 4 iSCSI  8 FC + 4 iSCSI  8 FC + 4 iSCSI
Back-end base        2 FC            4 FC            8 FC            8 FC
Max drives           120             240             480             480-960
Total IO slots       6               8               10              12
IO populated in base 4               4               6               6
Front-end FC ports   12              12              16              24
Back-end FC          2               4               8               16
Max iSCSI            8               12              12              16
-
NetApp
Write Anywhere File Layout (WAFL)
Very different characteristics
Overrides many standard database strategies
No need to defragment
See NetApp specific documents
Index rebuild to clean up unused space maystill be helpful
-
Enterprise SAN
Massive cross-bar
RAID groups
RAID 5 3+1 or 7+1, RAID 10 2+2 or 4+4
Hyper Volume: 16GB slices from a RAID group
LUNs created from Hyper Volumes
Theory: a massive number of disks, say 1000, can do 150K IOPS. Each server averages 10K IOPS steady, with surges to 50K. Many servers can share a large SAN
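The sharing theory only works while surges stay rare. A toy model of the numbers above (the server count of 12 is my assumption for illustration):

```python
# Shared-SAN oversubscription: steady-state load fits easily, but a
# single server surging to 50K IOPS already exceeds the SAN's total.
san_iops = 1000 * 150          # 1000 disks at ~150 IOPS each
servers, steady, surge = 12, 10_000, 50_000

print(servers * steady <= san_iops)                     # True: 120K <= 150K
print(servers * steady + (surge - steady) <= san_iops)  # False: 160K > 150K
```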
-
Table Scan to Disk
[Chart: table scan rate in MB/sec (0 to 1,600) by lock hint (Default, RowLock, PagLock, TabLock, NoLock) for SQL 2000 and SQL 2005 clustered index scans and heap table scans]
-
Low Queue Writes
Read activity drops sharply during checkpoints
4 x 15K SCSI disks
-
Updates: All data in memory
Checkpoints do not slow the SQL batch; no reads required
-
HP Test System 2
rx862016 Itanium 2
1.5GHz
HSV110 HSV110 HSV110 HSV110
8 2Gb/sFC ports 6 SCSI Disks
rx862016 Itanium 2
1.5GHz
HSV110 HSV110 HSV110 HSV110
8 2Gb/sFC ports 6 SCSI Disks