Data Archiving for VMware SDDC Using NetBackup: Learn from This Large, Successful VMware SDDC Deployment Journey at a Large Pharmaceutical Company
Frank Floyd: Johnson & Johnson, ITS Sr. Manager, SDDC Global Core Infrastructure Services, Backup & Recoverability
Chirag Patel: VMware, Principal Architect
VMworld 2017 Content: Not for publication or distribution
Disclaimer
• This presentation may contain product features that are currently under development.
• This overview of new technology represents no commitment from VMware to deliver these features in any generally available product.
• Features are subject to change and must not be included in contracts, purchase orders, or sales agreements of any kind.
• Technical feasibility and market demand will affect final delivery.
• Pricing and packaging for any new technologies or features discussed or presented have not been determined.
# PBO2794BU CONFIDENTIAL
Session Abstract
• Over the past three years, Johnson & Johnson and VMware collaborated on the planning, design, deployment, and operationalization of a Software-Defined Data Center across the globe. The solution offers fully automated virtual machine provisioning using a wide range of technology and software, including the vRealize Suite (vRealize Automation, vRealize Orchestrator, vRealize Operations Manager, vRealize Log Insight), VMware NSX, flash storage, and high-performance computing platforms. The SDDC environment is the foundation of a modernization strategy based on simplifying and automating server, storage, and networking infrastructure using software-defined technology to enable refresh initiatives without major downtime and eliminate "technology debt".
• This large pharmaceutical company is deploying SDDC for its main datacenters, remote offices, and DMZ environment. This session covers technical deployment details, best practices, and lessons learned from this implementation for data archiving using NetBackup.
Business/Customer Outcomes: "Driving from Legacy to True SDDC Benefits"

2014 – Enterprise (legacy state)
• Weeks to provision infrastructure
• Multiple touch points and forms to provision
• Costly infrastructure to operate
• Opaque usage and cost allocations
• Complex and slow to scale
• Limited high availability and disaster recovery

Business Benefits
• Faster time-to-market
• Business can fail fast, iterate, and learn (Agile/DevOps)
• Adapt to market disruptions, and get ahead of them
• Deliver an on-premises cloud
• Improved reliability and security: built-in workload management and security dramatically reduce alerts
• Built-in diagnostics and escalation
• Enhanced change isolation
• Reduced time to resolve failures

SDDC Current State – 2017
Business agility
• Provisioning reduced to hours
• Creation of an "as a service" model for demand agility
• Easily pilot new ideas
• Add resources when needed; scale when required
• Break the hold of physical hardware restrictions
Availability
• DR services built into the SDDC POD
• Workload resiliency included in the SDDC architecture
• No longer required to manage availability at the physical layer
Scalability
• Rapid scaling of resources
• No longer requires a PO to add hardware
Financials
• Usage model for consumption
• Remove hardware TLM as a business requirement
• Scalable SDDC POD deployments
SDDC Journey & Project Timeline

Strategy → Planning → Roadmap → IT Transformation → Global SDDC Rollout

Q2 2014 – SDDC Strategy: prove the transformative power of SDDC solutions
• Viability of SDDC
• SDDC business case
• Establish the SDDC strategy
• Prove out the SDDC "art of the possible"
• Large ERP-on-SDDC PoC
• Target the first major release

2014 – Initial SDDC DevOps Approach
• Built "fail fast" into the DevOps team
• Established SDDC DevOps leadership
• Built executive transformation sponsorship
• Partnered with VMware leadership
• Partnered with IT leadership
• Created a Dev/QA environment for the DevOps team
• Production SDDC rollout in Singapore
• Stress test of the SDDC environment
• Set operational goals and business intent
• Established key milestones and metrics
• Communicated to all business units and customers

2015–2017 – Global SDDC Implementation
• Agile, Agile, Agile
• Global deployment: 6 sites worldwide
• Repeated the communication plan to customers
• VMware partnership and full engagement
• Built out the SDDC DevOps team and support model
• Established a customer DRI for workload migrations
• 2015: first DR/colo site in Malaysia and first ERP app on SDDC
• Automation and integration implementation
• Backup and restore of the entire SDDC stack
• Configuration/design and operations assessment

2017–2018 – Global SDDC Expansion & Remote Site Deployment
• Automation and integration enhancements (OS, DBaaS (SQL and Oracle), SAP BP)
• As of Aug 2017, ~18,678 VMs in SDDC
• Aggressive goals to migrate workloads from legacy
• Increased adoption of ERP applications
• Expansion of capabilities into SDDC DMZ workloads
• Begin rolling out SDDC at remote sites
• Complete global migrations to the SDDC target platform
• Achieve 40% enterprise application rationalization
SDDC Technology Stack

SDDC Environment with the VMware vRealize Suite

Private cloud services
▪ vRO for cloud orchestration
▪ vRA for policy-based governance and service delivery

SDDC stack (virtualization)
▪ Network: NSX for vSphere
▪ Security: NSX for vSphere
▪ Hypervisor: ESXi / vSphere
▪ Storage: API-based storage virtualization

Physical layer
▪ LAN: spine / border-leaf layout
▪ Storage: tiered storage arrays
▪ Servers: x86 2-socket CPU
▪ Backup: enterprise backup solution based on virtualization

OS layer & above
▪ Microsoft Windows operating systems
▪ Linux operating systems

Management tools
▪ vRA configuration and compliance management through workload automation
▪ vROps for unified performance, incident, and capacity management
▪ Integration of vROps with IT service management tools (CMDB and event management platforms)
▪ Integration with identity and access management
▪ Financial management and cost transparency
▪ Enterprise backup solutions utilizing VADP for data backup
▪ Disaster recovery through SRM
▪ NSX for software-defined networking

Design Criteria
▪ Establish release management standards and a DevOps approach
▪ Separate hardware and software for agility, and break the legacy mindset
▪ Standardized maintenance windows for the SDDC platform
▪ Increase the level of availability and uptime
▪ Improve storage and backup OLA rates
Data Archiving Components
• How do we achieve data archiving?
• Business requirements
– Reduce the SLA from 3 days to minutes for provisioning and decommissioning backups for servers
– Replicate data offsite
– Use VADP backups aggressively (>65% of workloads use VADP backups)
– Reduce the hardware stack needed to support a POD
• Original forecasts estimated 32 proxy servers
• Today we utilize 4 proxy servers, 2 IP media servers, and 1 master server
– Achieve >13x deduplication ratios
• Tapeless target deduplication ratio when deployed in 2014/2015
• SDDC has achieved a 35x deduplication ratio, resulting in a >97% reduction in data on storage
• 80 PB of logical data resides within SDDC, over 55% of our total data footprint
– Patch backup server OSes to the latest patch release from vendors
• This has proved challenging given the support matrix between RHEL, Veritas, and VMware
• Multiple tests performed onsite and confirmed by engineering to support OS version proliferation with VMware
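A deduplication ratio maps directly to a storage-reduction percentage: a ratio R stores 1/R of the logical data, i.e. a (1 − 1/R) reduction, so the 35x ratio above works out to just over 97%, consistent with the ">97%" figure. A minimal sketch (the 80 PB and 35x figures come from this slide; everything else is illustrative):

```python
def reduction_pct(dedup_ratio: float) -> float:
    """Percent reduction in stored data implied by a deduplication ratio."""
    return (1.0 - 1.0 / dedup_ratio) * 100.0

logical_pb = 80.0   # logical data in SDDC, from the slide
ratio = 35.0        # achieved deduplication ratio, from the slide

print(f"{reduction_pct(ratio):.1f}% reduction")          # 97.1% reduction
print(f"{logical_pb / ratio:.2f} PB physically stored")  # 2.29 PB physically stored
```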
Original POD Design

SDDC R1 POD (EDC Regional Site) 1000-VM Design – 2014 Hardware Environment

Servers and Chassis
▪ HP c7000 BladeSystem with HP BL460c Gen8 blades (20 cores total)
▪ Two 2.8 GHz 10-core CPUs, 25 MB cache, 512 GB RAM
▪ One dual-port 10 Gb Converged Network Adapter (CNA) = two 10 Gb Ethernet ports
▪ Two dual-port 16 Gb Fibre Channel HBAs = four 16 Gb FC ports
▪ Cisco/HP B22HP edge switch

Network
IP network:
▪ Cisco Nexus 7xxx switches (spine)
▪ Cisco Nexus 56xx (access leaf)
▪ Cisco Nexus 56xx (border leaf)
▪ Cisco/HP B22HP switch modules
SAN network:
▪ Brocade DCX directors

Storage (700 TB)
▪ 3PAR 7400 (3 tiers) with auto-tiering
▪ SSD, SAS, and NL-SAS
▪ NetApp NAS with replication offsite

Compute Architecture (yellow servers)
▪ 32-node VMware clusters
▪ vCenter servers
▪ Integration into operations software stacks

Backup / Recovery
NetBackup software:
▪ Master/media and VM proxy backup
Tapeless:
▪ DD990 with replication offsite

SDDC Management (red servers)
SDDC management stack:
▪ ESXi and vCenter
▪ vCloud Suite Enterprise (vCAC agents, vCO, vCOps)
▪ NSX and Log Insight
▪ NetBackup proxy servers / OpsCenter
▪ Storage management software
[Rack diagram: SDDC POD layout spanning Network CORE, Compute, and Management racks, plus a bunker site. Components shown: Cisco Nexus 7004 / N7K core and N5K (56128 & 5672) leaf switches, Brocade CR16-8 SAN directors, HP ProLiant BL460c Gen8 compute blades, 3PAR storage, NetApp FAS8040 NAS heads, and a Data Domain DD990 backup target with replication offsite over the IP WAN. Note: an additional rack may be required for infrastructure wiring or power considerations.]
NetBackup / Data Domain Components & Architecture

• Step 1: Conduct NBU / server assessment
• Step 2: Network bandwidth requirements and design
• Step 3: Deploy the network
• Step 4: Lab setup
• Step 5: Implement (Phase 1)
• Step 6: Redirect legacy backups to Data Domain
• Step 7: Configure and replicate to offsite backups
• Step 8: Begin decommission preparation
• Step 9: Repeat for other sites (Phases 2/3)
• Step 10: Transition to steady state
[Diagram: clients and servers on primary storage are backed up over the current FC fabric to the backup/media server, written to onsite retention storage on Data Domain via FC or IP (OST), and replicated over the WAN to offsite disaster recovery storage.]
Architecture and Scaling Considerations
• Challenges and tuning we applied to ensure we are backing up 100% of all workloads
• vCenter
– Increased CPU/Memory of vCenter to accommodate the number of requests/Snapshots being run
– Suggest keeping VMs to < 7,000 per vCenter
• NetBackup
– Leverage Resource Limits
– Use Accelerator where possible
– Monitor for hung query / processes
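The <7,000-VMs-per-vCenter guideline above is easy to turn into an inventory check. A hypothetical sketch (the vCenter names and counts are made up; in practice the counts would come from the vSphere API):

```python
VM_SOFT_LIMIT = 7000  # suggested per-vCenter ceiling from this slide

def over_limit(inventory: dict, limit: int = VM_SOFT_LIMIT) -> list:
    """Return the vCenters whose VM count exceeds the soft limit."""
    return sorted(vc for vc, count in inventory.items() if count > limit)

# Illustrative inventory: vCenter name -> VM count.
inventory = {"vc-us-east": 6500, "vc-emea": 7400, "vc-apac": 5100}
print(over_limit(inventory))  # ['vc-emea']
```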
NetBackup / Veritas Considerations
Which Transport Mode to Use?
• NBD Transport
– Easiest and most commonly used
– Need to understand the performance implications (VMkernel QoS)
– VMware's VVols may mandate this
• SAN Transport (Fibre Channel and iSCSI)
– Lowest impact on ESXi
– Typically used for more important VMs
– Explicit access to the SAN and datastore required
• HotAdd
– All backup processing load is placed on ESXi hosts
– Additional HotAdd VMs to install and maintain
– The ESXi host running the HotAdd VM must have direct access to the datastores
– Makes sense in certain environments, e.g. remote offices
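The trade-offs above can be condensed into a small decision helper. This is only a sketch of the slide's rules of thumb, not a Veritas or VMware API:

```python
def pick_transport(san_access: bool, vvols: bool, remote_office: bool) -> str:
    """Choose a VADP transport mode following the guidance above."""
    if vvols:
        return "nbd"     # VVols may mandate NBD
    if san_access:
        return "san"     # lowest ESXi impact when SAN/datastore access exists
    if remote_office:
        return "hotadd"  # HotAdd can make sense e.g. at remote offices
    return "nbd"         # default: easiest and most commonly used

print(pick_transport(san_access=True, vvols=False, remote_office=False))  # san
```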
NetBackup / Veritas Considerations
NBD Transport: NetBackup VIP Resource Limits
• Set the ESXi and Datastore resource limits to 1 or 2
• Balance backups across ESXi hosts
• Minimize the load on each ESXi host
• Maximize throughput to the NetBackup media server
• Snapshot redo logs stay smaller at any given time
• I/O is spread out across datastores
• Less is more: backups are actually faster and more reliable
NetBackup / Veritas General Recommendations
• vCenter resource limit
– No higher than can be supported by the NetBackup media servers
– Recommend no more than around 50
– This number depends on the vCenter version
• ESXserver = 2
– Spreads the load across the vSphere environment
• Datastore = 2
– The datastore is almost certainly heavily impacted by VM activity
– A lower number improves snapshot reliability and lowers I/O
Operational Considerations
• Ensure capacity is available for database API backups such as RMAN/SQL
– Two media servers per POD are sufficient
• Create standardized NetBackup policy names
– Enables scripts to be called that provision a client into the necessary policies and NetBackup settings
• Provisioning went from 3 days to minutes
– Clearing the host cache may be required if you have automated deletion of VMs that failed provisioning
• Train your partners on the modified process
– Your partners may become concerned as work they once did becomes automated
– Show them the benefit and where they can focus their time
• Who wants agentless backups?
– Why maintain a NetBackup client on 65% of your workloads?
– A vCO workflow was created to add or remove the NBU software
• Allows the client footprint to be reduced by 65%
• Enables provisioning to complete faster by not copying the NBU binaries and installing the software
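Standardized policy names are what make the provisioning scripts above possible: a client's attributes deterministically produce the policy it should be attached to. A minimal sketch with a hypothetical naming scheme (the fields and format are illustrative, not the ones used in this deployment):

```python
def policy_name(site: str, tier: str, workload: str) -> str:
    """Derive a standardized NetBackup policy name from client attributes."""
    parts = (site, tier, workload)
    if not all(p.isalnum() for p in parts):
        raise ValueError("policy name parts must be alphanumeric")
    return "_".join(p.upper() for p in parts)

print(policy_name("sg1", "gold", "vadp"))  # SG1_GOLD_VADP
```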
Best Practices - Lessons Learned
• Stick to the design
– Failure to stick to the design will lead to capacity issues
• Whose fault is that? It doesn't matter
• Stay tightly integrated with the group performing automation enhancements
– Example: moving workloads automatically between PODs is great, but it overprovisions the backups in one environment if the backups don't move as well
• Keep processes consistent
– New provisioning works really well; migrations are challenging
• Often both a snapshot backup and a traditional file-level backup run due to the VM being placed in the wrong folder
• Force standardization and reuse existing processes already developed (i.e., use the same process a newly provisioned VM would use)
Best Practices - Lessons Learned
• Ensure sufficient datastore space for snapshots
• VMware Tools and Symquiesce will conflict on Linux VMs
– Veritas and VMware have thoroughly tested and confirmed Symquiesce is no longer required
• VMware Tools stun of the VM can cause VMs to hang
– Use the right, latest, and consistent version of VMware Tools
• Monitor for hung VMs
– Caused by VMware Tools and other processes within the OS trying to write to logs during quiescing
– Caused by multiple balloon drivers on Linux VMs
Highly Suggested Sessions
• NET2866BU : Learn from challenging but successful NSX Deployment Journey with VMware SDDC at large Pharmaceutical Company
• MGT2898PU : Pushing the Limits: Critical Customers Partnering with VMware Engineering
• DEV1519PU : DevOps in the Real World: Customer Panel
• NET1777BU : Troubleshooting Methodology for VMware NSX for vSphere