TRANSCRIPT
暨南大學 Filer Training
Simple
Fast
Reliable
Agenda
(I) Filer Fundamental Introduction
1. NetApp Storage Introduction
2. NetApp Storage Overview
(Break Time)
3. FAS2040 Spec Introduction
4. FAS2040 Fundamental Setup
(Break Time)
(II) Filer Configuration
1. NetApp FilerView Management
(Break Time)
Filer Fundamental Introduction
NetApp Storage Introduction
Hardware expansion flexibility with no data movement:
Swap in a new controller head to upgrade directly to a higher-end model. Data is unaffected, with no need for backup and restore through another storage system or tape system.
Gigabit Ethernet
Frontend: multiple Ethernet channels; Backend: multiple FC-AL channels
System upgrades never require data conversion:
Upgrade to a higher-end system without time-consuming data migration.
[Chart: NetApp FAS upgrade path with maximum raw capacities - FAS270 (16 TB), FAS2020 (65 TB), FAS3020 (84 TB), FAS3070 (504 TB), FAS6040 (840 TB), FAS6070, FAS6080 (1176 TB)]
NetApp FAS: upgrades require no data migration.
Competitors' upgrades require time-consuming data migration:
EMC: DMX to DMX3; CX to DMX/DMX3; CX to CX/CX3; AX to CX3
IBM: DS to DS
HDS: Thunder to Lightning; AMS to USP; WMS to AMS/USP
HP: EVA to EVA; XP to XP; EVA to XP; MSA to EVA/XP
Upgrade Paths
• Simple "head swap"
• Flexible upgrade options
• Zero data migration
• Investment protection
[Chart: FAS250 to FAS270 (* disk shelf conversion) to FAS3000 series to FAS6000 series; capacities from 4 TB up to 504 TB]
* Disk shelf conversion
NetApp Unified Storage works under any SAN architecture.
[Figure: host, fabric, and storage layers of Unified (Fabric-Attached) Storage -
• FC-SAN: SCSI block protocol over FCP on dedicated Fibre Channel; IP-SAN: SCSI block protocol over iSCSI (IP over GbE) on Ethernet
• NAS: NFS, CIFS (and DAFS) file protocols over the corporate LAN
• Block clients: database, email, and AP servers; file clients: file sharing, home directories, web/streaming
• A single storage layer with FC disks and S-ATA disks serves both]
Data storage must choose an access model that fits the data's characteristics.
Comparison of DAS, SAN (Block), and NAS (File)
[Diagram, summarized:
• DAS: the application server owns the file system and RAID; disks attach directly over SCSI or FCP.
• SAN (block): the application server still owns the file system; blocks travel over an FC switch and infrastructure (FCP), or an Ethernet switch and infrastructure (iSCSI), to RAID in the array.
• NAS (file): the file system and RAID live in the storage system; the application server accesses files over NFS or CIFS through an Ethernet switch.]
Comparison of the two SAN types
[Diagram, summarized:
• DAS: the application server owns the file system and RAID; disks attach directly over SCSI or FCP.
• FC-SAN: the application server owns the file system; blocks travel over FCP through an FC switch to RAID (with virtualization) in the array.
• IP-SAN: the application server owns the file system; blocks travel over iSCSI (or FCP) through an Ethernet switch to the same array.
NAS access over NFS/CIFS is shown alongside for contrast.]
[Figure: positioning - SAN (Fibre Channel on a dedicated fabric, or iSCSI on the LAN) for enterprise workloads; NAS (CIFS, NFS over the LAN) for departmental workloads]
Unified Multiprotocol Storage (FAS2000, FAS3000, FAS6000, V-Series)
• Consolidate file and block workloads into a single system
  - NAS: CIFS, NFS
  - SAN: iSCSI, FCP
• Adapt dynamically to performance and capacity changes
• Multivendor storage support with V-Series (front-ending HP, EMC, and HDS arrays)
NetApp Unified Storage
[Deployment diagram, summarized:
• Corporate data center: a NetApp FAS series system serves Windows® servers (Exchange, CRM, ERP, SQL Server) over iSCSI, and home directories / network shares over CIFS and NFS to UNIX®, Linux®, and Windows servers; an FC SAN connects a tape library; a WAN links the remote sites.
• Regional data center: Windows servers running Exchange and SQL Server use iSCSI over the LAN.
• Regional office: a NetApp FAS series system serves home directories over CIFS and Exchange over the LAN.]
Filer Fundamental Introduction
NetApp Storage Overview
RAID-DP
FlexVol
NVRAM
Snapshot
SnapRestore
FlexClone
SyncMirror
Cluster
RAID levels: 0, 1, 0+1, 1+0, 4, 5, 6 - and WAFL with RAID-DP
RAID-DP
• Economy: data protection at the lowest cost
• Performance: the highest read and write performance
• Expansion: dynamically add one or more disks at any time, with no waiting
• Safety: protects data against any single disk failure
• Safety during rebuild: protects against *any* two disk failures
Economy + Performance + Expansion + Safety + Safety during rebuild
Characteristics of various RAID levels
Advantages of NetApp RAID-DP
[Chart (axis 0%-20%), FC vs. ATA drives:
• Average annual disk failure rate: up to 5% (chart shows roughly 3% FC, 5% ATA)
• *Probability of a disk sector error (300 GB FC / 320 GB SATA example): roughly 0.2% FC, up to 2.6% ATA
• *Probability that RAID 3/4/5 loses data to a sector error during a disk rebuild (8-disk RAID group example): roughly 1.7% FC, up to 17.9% ATA
• *Probability that RAID-DP loses data when 2 further sector errors occur during a data-disk rebuild (16-disk RAID group example): less than 1 in 1 billion (0.0000000001%) - protected with RAID-DP
Data remains protected during a RAID rebuild.
*Source: Network Appliance]
FlexVol
[Figure: applications App 1-3 on rigid per-application volumes (traditional layout) vs. the same applications drawing on a shared pool of FlexVol volumes]
Improved access performance: NetApp's internal virtualization increases performance substantially (chart compares the traditional approach against the industry's best benchmark).
Instant, dynamic, online file-system growth
• This is Dynamic Online File System Growth.
• Capacity can grow (or shrink) by as little as a few MB, or by several TB, at a time.
• Newly added capacity is usable immediately, with no waiting.
• System operation and performance are unaffected; the file system is not rebuilt.
• For Unix, mount-point settings need no change; for Windows, mapped-network-drive settings need no change.
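The resize above is a single console command in Data ONTAP 7-Mode; a sketch with hypothetical controller and volume names (netapp1, flexvol1):

```text
netapp1> vol size flexvol1 +10g   # grow flexvol1 by 10 GB, online
netapp1> vol size flexvol1 -2g    # shrink it by 2 GB, also online
netapp1> vol size flexvol1        # report the current size
```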
Instant, dynamic, online expansion of the file-count limit
• With the file system's capacity unchanged, the inode count (the maximum number of files the volume can hold) can be raised at any time.
• This avoids the situation where a volume hits its file-count limit and can accept no more files even though free space remains.
• System operation is completely unaffected.
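Raising the inode ceiling is likewise one command in 7-Mode; a sketch with hypothetical names (in our understanding the limit can be raised but not lowered):

```text
netapp1> maxfiles flexvol1            # show the current inode limit and usage
netapp1> maxfiles flexvol1 2000000    # raise the limit to 2,000,000 files
```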
Snapshot
• Snapshots never move blocks, so they use the least space at the highest efficiency; each volume can keep up to 255 snapshots.
• The snapshot area is RAID-protected: a single disk failure does not lose snapshot data.
• Administrators can delete the backup data of any one snapshot point at any time without affecting the contents of the other snapshots.
• Snapshots can run on mixed hourly, daily, and weekly schedules, or on demand, and the snapshot reserve can be adjusted dynamically between 0% and 50%.
• Adjusting the reserve never discards existing snapshots; exceeding the configured reserve raises a warning, but snapshots continue normally and existing snapshots are not overwritten.
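The schedule, on-demand, and reserve behaviour above maps onto the 7-Mode snap command family; a sketch with hypothetical names:

```text
netapp1> snap sched flexvol1 0 2 6          # keep 0 weekly, 2 nightly, 6 hourly snapshots
netapp1> snap create flexvol1 before_patch  # take an on-demand snapshot
netapp1> snap list flexvol1                 # list snapshots and the space they hold
netapp1> snap reserve flexvol1 20           # set the snapshot reserve to 20%
```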
NetApp Storage OverView
SnapRestore
112/04/19
SnapRestore® can restore an entire file system in seconds, regardless of capacity.
[Figure: file SOME.TXT occupies disk blocks A, B, C; the active file system now points at a modified block C′, while Snapshot.0 still references the original blocks. Because the Snapshot.0 backup exists, the application's corruption affects only block C′.]
SnapRestore
SnapRestore® can restore an entire file system in seconds, regardless of capacity.
[Figure: after SnapRestore, the active file system again points at the original disk blocks A, B, C of SOME.TXT.]
With a single command, the entire file system (or a single file) is instantly restored to the backup captured at a chosen snapshot point in time.
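The single-command restore looks like this in 7-Mode (requires the snaprestore license; names hypothetical):

```text
netapp1> snap restore -t vol -s nightly.0 /vol/flexvol1           # whole volume
netapp1> snap restore -t file -s hourly.2 /vol/flexvol1/SOME.TXT  # single file
```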
FlexClone
FlexClone is built on Snapshot technology.
[Figure: inside one aggregate, a snapshot of a parent FlexVol serves as the base for a writable FlexVol clone that shares the parent's unchanged blocks.]
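In 7-Mode a clone is created from a parent volume's snapshot with vol clone (requires the flex_clone license; names hypothetical):

```text
netapp1> snap create flexvol1 clone_base                     # base snapshot for the clone
netapp1> vol clone create flexclone1 -b flexvol1 clone_base  # writable clone sharing blocks
netapp1> vol clone split start flexclone1                    # optional: split from the parent
```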
FAS Deduplication (A-SIS): Function
[Figure: a flexible volume holding general data plus metadata passes through the deduplication process, leaving deduplicated (single-instance) data plus metadata in the flexible volume.]
FAS Deduplication: Commands ('sis' == single instance storage command)
• License it: license add <dedup_license>
• Turn it on: sis on <vol>
• [Deduplicate existing data]: sis start -s <vol>
• Schedule when to deduplicate: sis config [-s schedule] <vol>
• ...or run manually: sis start <vol>
• Check out what's happening: sis status [-l] <vol>
• See the savings: df -s <vol>
FAS Deduplication: 'sis status' Progress Messages and Stages

Gathering:
netapp1> sis status
Path       State    Status  Progress
/vol/vol5  Enabled  Active  25 MB Scanned

Sorting:
/vol/vol5  Enabled  Active  25 MB Searched

Deduplicating:
/vol/vol5  Enabled  Active  40MB (20%) done

Checking:
/vol/vol5  Enabled  Active  30MB Verified
OR
/vol/vol5  Enabled  Active  10% Merged

Complete:
netapp1> df -s /vol/vol5
Filesystem  used      saved    %saved
/vol/vol5/  24072140  9316052  28%
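The %saved column can be reproduced from the other two: the saving is expressed against the logical size (used + saved), not the physical used size. A minimal sketch; the function name is ours, not an ONTAP API, and we assume df -s rounds to the nearest whole percent:

```python
def dedup_savings_pct(used_kb: int, saved_kb: int) -> int:
    """Space savings the way `df -s` reports it: saved over (used + saved)."""
    # Assumption: the reported figure is rounded to the nearest whole percent.
    return round(100 * saved_kb / (used_kb + saved_kb))

# The /vol/vol5 figures from the slide: 24072140 KB used, 9316052 KB saved.
print(dedup_savings_pct(24072140, 9316052))  # -> 28
```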
Typical Space Savings Results
In archival and primary storage environments:
• Video surveillance: 1%
• PACS: 5%
• Movies: 7%
• Email archive: 8%
• ISOs and PSTs: 16%
• Oil & gas: 30%
• Web & MS Office: 30-45%
• Home dirs: 30-50%
• Software archives: 48%
• Tech pubs archive: 52%
• SQL: 70%
• VMware: 40-60%
In data backup environments, space savings can be much higher. For instance, tests with Commvault Galaxy provided a 20:1 space reduction over time, assuming daily full backups with 2% daily file modification rate. (Reference: http://www.netapp.com/news/press/news_rel_20070515)
Stretch MetroCluster: Campus DR Protection
[Figure: two filers up to 300 meters apart on the same LAN; Site #1 and Site #2 each hold their own volumes (A, B) plus a synchronous FC mirror of the partner's data (X / X-mirror, Y / Y-mirror).]
What: replicate synchronously; upon disaster, fail over to the partner filer at the remote site to access the replicated data.
Benefits: no single point of failure, no data loss, fast data recovery.
Limitation: distance (300 meters).
MetroCluster is a unique, cost-effective synchronous replication solution for combined high availability and disaster recovery within a campus or metro area.
Fabric MetroCluster: Metropolitan DR Protection
[Figure: Building A and Building B, up to 100 km apart, joined by a cluster interconnect over dark fibre; Building A holds vol X and the mirror vol Y′, Building B holds vol Y and the mirror vol X′; both buildings are on the LAN.]
Benefits: disaster protection, complete redundancy, an up-to-date mirror, and site failover.
Fabric MetroCluster (2): Metropolitan DR Protection
[Figure: the same two-building topology, with the dark-fibre cluster interconnect carried over DWDM equipment at each building; still up to 100 km.]
Benefits: disaster protection, complete redundancy, an up-to-date mirror, and site failover.
Disaster Protection Scenarios (relative to the primary data center)
• Within the data center: Local High Availability (component failures, single-system failures)
• Campus distances: Campus Protection (human error, HVAC failures, power failures, building fire, architectural failures, planned-maintenance downtime)
• WAN distances: Regional Disaster Protection (electric-grid failures, natural disasters: floods, hurricanes, earthquakes)
NetApp DR Solutions Portfolio
• Within the data center: Clustered Failover (CFO): cost-effective zero-RPO protection, high system protection
• Campus distances: MetroCluster (Stretch): cost-effective zero-RPO protection
• Metro distances: MetroCluster (Fabric): cost-effective zero-RPO protection
• WAN distances: Sync SnapMirror (most robust zero-RPO protection); Async SnapMirror (most cost-effective, with RPO from 10 minutes to 1 day)
Disk Failure Protection Solution Portfolio
[Chart: classes of failure scenarios handled vs. increasing cost of protection - failure classes range from checksum (media) errors through multi-disk failures:
• RAID 4: any 1 disk
• RAID-DP: any 2 disks
• RAID 4 + SyncMirror: any 3 disks
• RAID-DP + SyncMirror: any 5 disks]
Cluster
Overview of High Availability
• Cluster: a pair of standard NetApp controllers (nodes) that share access to separately owned sets of disks, in a shared-nothing architecture. Also referred to as redundant controllers.
• The logical configuration is active-active. A pseudo active-passive configuration is achieved by owning all disks under one controller (except for a boot disk set under the partner controller).
• Dual-ported disks are cross-connected between controllers via independent Fibre Channel links.
• A high-speed interconnect between the controllers acts as a "heartbeat" link and as the path for the NVRAM mirror.
• Provides high availability in the presence of catastrophic hardware failures.
High Availability Architecture
[Figure: Controllers A and B attached to the LAN/SAN and joined by a high-speed interconnect; each has an active FC path to its own disks and a standby FC path to its partner's disks (shared-nothing model).]
Mirrored NVRAM
[Figure: Controllers A and B on the LAN/SAN, each with local NVRAM mirrored to the partner's NVRAM.]
When a client request is received:
• The controller logs it in its local NVRAM.
• The log entry is also synchronously copied to the partner's NVRAM.
• Only then is the acknowledgement returned to the client.
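The three steps above can be modelled in a few lines; this is an illustrative sketch with our own class names, not NetApp code:

```python
class Controller:
    """Toy model of one HA node: a local NVRAM log plus a partner reference."""

    def __init__(self, name: str):
        self.name = name
        self.nvram = []        # local NVRAM log entries
        self.partner = None    # set once both controllers exist

    def handle_write(self, entry: str) -> str:
        self.nvram.append(entry)          # 1. log in local NVRAM
        self.partner.nvram.append(entry)  # 2. synchronous copy to partner NVRAM
        return "ack"                      # 3. only then acknowledge the client


a, b = Controller("A"), Controller("B")
a.partner, b.partner = b, a

a.handle_write("write#1")
# Both logs now match, so B could replay the entry if A failed mid-write.
print(a.nvram == b.nvram)  # -> True
```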
Failover mode
[Figure: the surviving controller hosts a virtual instance of failed Controller A, answering on Controller A's IP address or WWPN with the mirrored NVRAM contents.]
During the failover process:
• A virtual instance of the partner is created on the surviving node.
• The partner's IP addresses (or WWPNs) are set on standby NICs (or HBAs), or aliased on top of existing NICs (or HBAs).
• The surviving node takes over the partner's disks and replays its intended NVRAM log entries.
Takeover and Giveback
• Upon detection of a failure, failover takes 40-120 seconds.
• On the takeover controller, data service is never impacted and remains fully available during the entire failover operation.
• On the failed controller, both takeover and giveback are nearly invisible to clients:
  - NFS cares only a little (typically stateless connections)
  - CIFS cares more (connection-based, caches, etc.)
  - Block protocols (FC, iSCSI): depends on the tolerance of the application; host HBAs are typically configured for a (worst-case) 2-minute timeout
• Takeover is manual, automatic, or negotiated; giveback is manual or automatic.
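Takeover and giveback are driven by the 7-Mode cf command; a sketch (controller name hypothetical):

```text
netapp1> cf status     # confirm the partner is up and takeover is enabled
netapp1> cf takeover   # negotiated takeover of the partner (e.g. before maintenance)
netapp1> cf giveback   # hand resources back once the partner is healthy again
```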
High Availability Architecture
Components required:
• A second controller
• Cluster interconnect kit
• 4 crossover FC cables
• 2 cluster licenses
Cluster Interconnect Hardware
• Open-standards InfiniBand link
• Fast (10 Gbit/s)
• Redundant connections
• Up to 200 m between controllers
• Integrated into the NVRAM5 card
Filer Fundamental Introduction
FAS2040A Spec Introduction
FAS2040 Chassis Front Interface
[Front panel: Power LED, Fault LED, Controller A, Controller B]
FAS2040 Chassis Rear Interface
• FC 0a port, FC 0b port
• Console port
• SAS 0d port
• BMC management 10/100 Ethernet port
• e0P port (ACP)
• GigE ports e0a, e0b, e0c, e0d
• Power
• PSU fault / PCM fault LEDs
• NVMEM status LED
FAS2040 Specifications

Filer Specifications                               FAS2040
Max. raw capacity                                  136 TB
Max. number of disk drives (internal + external)   136
Max. volume/aggregate size                         16 TB
ECC memory                                         8 GB
Ethernet 10/100/1000 copper ports                  8
Onboard Fibre Channel ports                        4 (1, 2, or 4 Gb)
Filer Fundamental Introduction
FAS2040 Fundamental Setup
1. Setup
2. How to add disk to aggr
3. Snapshot & SnapRestore Demo
Setup
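First-time configuration runs through the interactive setup command on the serial console. Prompt wording varies by Data ONTAP release, and all values below are hypothetical:

```text
netapp1> setup
Please enter the new hostname []: netapp1
Please enter the IP address for Network Interface e0a []: 192.168.1.10
Please enter the netmask for Network Interface e0a [255.255.255.0]:
Please enter the name or IP address of the default gateway []: 192.168.1.254
...
```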
How to add disk to aggr
Add Disk to Aggr0
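Adding spare disks to aggr0 uses the 7-Mode aggr command; a sketch (the disk ID is hypothetical):

```text
netapp1> aggr status -s             # list available spare disks
netapp1> aggr add aggr0 2           # add two spares to aggr0
netapp1> aggr add aggr0 -d 0c.00.5  # or add one specific disk by ID
netapp1> df -A aggr0                # confirm the new aggregate capacity
```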
NetApp FilerView Management
FilerView is reached from a browser at http://storage_ip/na_admin/

Q & A