Should I move my database to the cloud?


TRANSCRIPT


Should I move my database to the cloud?
James Serra, Big Data Evangelist, Microsoft
JamesSerra3@gmail.com
(On-prem vs IaaS VM vs SQL DB/DW)

So you have been running on-prem SQL Server for a while now. Maybe you have taken the step to move it from bare metal to a VM, and have seen some nice benefits. Ready to see a TON more benefits? If you said YES!, then this is the session for you, as I will go over the many benefits gained by moving your on-prem SQL Server to an Azure VM (IaaS). Then I will really blow your mind by showing you even more benefits by moving to Azure SQL Database (PaaS/DBaaS). And for those of you with a large data warehouse, I've also got you covered with Azure SQL Data Warehouse. Along the way I will talk about the many hybrid approaches so you can take a gradual approach to moving to the cloud. If you are interested in cost savings, additional features, ease of use, quick scaling, improved reliability, and ending the days of upgrading hardware, this is the session for you!


About Me
Microsoft, Big Data Evangelist
In IT for 30 years, worked on many BI and DW projects
Worked as desktop/web/database developer, DBA, BI and DW architect and developer, MDM architect, PDW/APS developer
Been a perm employee, contractor, consultant, business owner
Presenter at PASS Business Analytics Conference, PASS Summit, Enterprise Data World conference
Certifications: MCSE: Data Platform, Business Intelligence; MS: Architecting Microsoft Azure Solutions, Design and Implement Big Data Analytics Solutions, Design and Implement Cloud Data Platform Solutions
Blog at JamesSerra.com
Former SQL Server MVP
Author of the book Reporting with Microsoft SQL Server 2012

Fluff, but the point is that I bring real work experience to the session

2

Should I move my database to the cloud?

Thank you for attending; please fill out the evaluation cards

3

Agenda
SQL Server on-prem
SQL Server continuum
SQL Server in an Azure VM (IaaS)
Azure SQL Database (PaaS/DBaaS)
Azure SQL Data Warehouse (PaaS/DBaaS)
Summary

4

Benefits of the cloud
Agility
Grow hardware as demand requires (unlimited elastic scale); change hardware instantly
Reduce hardware as demand lessens, or turn it off if not used (pay for what you need)
Innovation
Fire up a server quickly (abbreviated infrastructure build-out times); low barrier to entry and quicker time to market
Make it easy to experiment, fail fast
Risk
Availability - high availability and disaster recovery built in or easy to implement
Reliability - four-nines SLA, storage durability, network redundancy, automatic geo-redundancy
Security - the cloud datacenters have the ultimate in security
Other
Cost savings: facility (co-location space, power, cooling, lights), hardware, software licenses, implementation, etc.
No need to manage the hardware infrastructure; reallocate staff
No commitment or long-term vendor lock-in
Allows companies to benefit from changes in technology, such as the latest storage solutions
More frequent updates to the OS, SQL Server, etc., done for you
Really helpful for proof-of-concept (POC) or development projects with a known lifespan

Four Reasons to Migrate Your SQL Server Databases to the Cloud: Security, Agility, Availability, and Reliability

Reasons not to move to the cloud:
Security concerns (potential for compromised information; issues of privacy when data is stored in a public facility; might be more prone to outside security threats because it's high-profile; some providers might not implement the same layers of protection you can achieve in-house)
Lack of operational control: lack of access to servers (say you are hacked and want to get to security and system log files; if something goes wrong you have no way of controlling how and when a response is carried out; the provider can update software, change configuration settings, and allocate resources without your input or your blessing; you must conform to the environment and standards implemented by the provider)
Lack of ownership (an outside agency can get to data more easily in a cloud data center that you don't own vs. getting to data in an onsite location that you own; or a concern that you share a cloud data center with other companies and someone from another company can be onsite near your servers)
Compliance restrictions
Regulations (health, financial)
Legal restrictions (i.e. data can't leave your country)
Company policies
You may be sharing resources on your server, as well as competing for system and network resources
Data getting stolen in flight (i.e. from the cloud data center to the on-prem user)

5

Constraints of on-premise data
Scale constrained to on-premise procurement
CapEx up-front costs; most companies instead prefer a yearly operating expense (OpEx)
A staff of employees or consultants must be retained to administer and support the hardware and software in place
Expertise needed for tuning and deployment
Lack of room in the datacenter

6

Reasons not to move a database to the cloud
No internet connection (deep mine) or slow internet connection (offshore oil rig)
Millisecond performance required (servers in a high-volume package plant)
Applications will stay on-prem
Locked-in lease of a datacenter with new equipment
Large amount of on-prem-born data
Huge migration effort for a short-lifespan database
Extremely sensitive data

This just means some databases should not be moved, but many others can!

7

Survey: cloud benefits

8

What would it take to build my own DB?

9

SQL Server on-prem

10

The evolution of SQL Server
SQL Server 2005 - Management Studio
SQL Server 2008 - Compression, policy-based mgmt., programmability (performance & productivity)
SQL Server 2008 R2 - PowerPivot (in-memory), SharePoint integration, Master Data Services (self-service BI)
SQL Server 2012 - AlwaysOn, in-memory ColumnStore, Data Quality Services, Power View, cloud (mission critical, cloud-ready)
SQL Server 2014 - In-memory across workloads, performance & scale, hybrid cloud optimized (mission critical & cloud performance)
SQL Server 2016 - HDInsight service, cloud BI, structured and unstructured data, built-in advanced analytics, rich visualizations on mobile devices, Always Encrypted, Stretch DB (advanced analytics & rich visualizations)

As you can see, SQL Server is not just a database. We have been adding capabilities across these three phases of the data lifecycle for years. Our engineering team continually aims to build new functionality into the platform so customers don't have to acquire and stitch solutions together. Let's take in-memory technology as an example: we first introduced in-memory technology back in 2008 R2 and started improving analytics by building in-memory into PowerPivot to analyze millions of rows of data in Excel. Then in SQL Server 2012 we expanded our in-memory footprint by adding in-memory to Analysis Services so IT could build data models much faster, and introduced an in-memory column store that could improve query speeds. With SQL Server 2014, we introduced an in-memory OLTP solution to significantly speed transactional performance. If you're running SQL Server 2005, you should start planning your upgrade before end of support hits next April. If you have an EA with Software Assurance, licensing costs for the upgrade are included. After support ends: you will no longer receive security updates, maintenance costs will increase, and you may encounter compliance concerns.

HPE Superdome X

It can handle up to 384 cores and 24 TB of memory! It uses the HPE 3PAR StoreServ 8450 storage array, which consists of 192 SSD drives (480 GB/drive) for a total of 92 TB of disk space.

http://www.jamesserra.com/archive/2016/02/hp-superdome-x-for-high-end-oltpdw/


Options for data warehouse solutions

Balancing flexibility and choice: by yourself, with a reference architecture, or with an appliance.

Each option involves installation, configuration, and tuning and optimization work; the less of it you do yourself, the shorter the time to solution (HIGH to LOW) and the higher the price.

By yourself: existing or procured hardware and support, procured software and support. Offerings: SQL Server 2014/2016, Windows Server 2012 R2/2016, System Center 2012 R2/2016.

With a reference architecture (build or purchase; hardware optional if you have it already): existing or procured hardware and support, procured software and support. Offerings: Private Cloud Fast Track, Data Warehouse Fast Track.

With an appliance: procured appliance and support. Offerings: Analytics Platform System.

Reference architectures (RAs): some assembly required, or no assembly required (through partners). Fast Track training: understand file system layouts; how you physically implement the DW has to follow the guidelines.

Appliance: MPP training - queries are different, modeling is different, data structures are different, partitioning is different.

Key decision points:
Data volumes: if you get above 95 terabytes
Procurement and operation logistics: what is your HW SKU process? Can you put a brand-new HW system into the data center?
Workload characteristics: highly concurrent data?
DW organizational maturity: do you already know MPP?


A workload-specific database system design and validation program for Microsoft partners and customers

Hardware system design: tight specifications for servers, storage, and networking; resource balanced and validated; latest-generation servers and storage, including solid-state disks (SSDs).

Database configuration: workload-specific database architecture, SQL Server settings, Windows Server settings, performance guidance. Software: SQL Server 2016 Enterprise, Windows Server 2012 R2.

Data Warehouse Fast Track for SQL Server 2016 (SQL Server 2016 on Windows Server 2012 R2, with validated processors, networking, servers, and storage): https://www.microsoft.com/en-us/cloud-platform/data-warehouse-fast-track


Parallelism

SMP is one server where each CPU in the server shares the same memory, disk, and network controllers (scale-up). MPP means data is distributed among many independent servers running in parallel and is a shared-nothing architecture, where each server operates self-sufficiently and controls its own memory and disk (scale-out).

15

Microsoft Analytics Platform System

16

[Diagram: the Microsoft data platform across on-premises and cloud. Relational: SQL Server, SQL Server in an Azure VM, Azure SQL DB, Azure SQL DW, Fast Track for SQL Server, Analytics Platform System, SQL Server 2016 + Superdome X. Non-relational: Analytics Platform System, Hadoop, Azure Data Lake Analytics, Azure Data Lake Store. Tied together by federated query, Power BI, Azure Machine Learning, and Azure Data Factory.]

17

MICROSOFT DATA PLATFORM - CLOUD

[Diagram: Azure data services - Stream Analytics, Azure HDInsight (MapReduce, Pig, Hive, HBase, Storm, Spark), Azure SQL Data Warehouse, Azure Data Factory, Azure ML, Event Hub, DocumentDB, Table Storage, Blob Storage (LRS / geo-replication), Azure Data Lake Store, Azure Data Lake Analytics (MPP), Azure Data Catalog, Azure Search, Azure Marketplace, Redis Cache, SQL Server in an Azure VM, SQL Database + Elastic Scale.]

DATA VISUALIZATION

[Diagram: consume via iOS, Android, and Windows Phone apps, the Power BI web portal, the Reporting Services portal, SharePoint, and Cortana; analyze and author with Microsoft Excel, Power BI Desktop, Mobile Report Publisher, Report Builder, and Report Designer; paginated, analytical, and mobile reports; delivered through the Power BI Service (cloud) and SQL Server Reporting Services (on-premises).]

MICROSOFT CLOUD DATA PLATFORM - ON PREMISES

[Diagram: Common tools - SQL Server Management Studio, SQL Server Data Tools, command line (PowerShell, BCP, SQLCMD), migration & upgrade tools (SSMA, Upgrade Advisor, MAP Toolkit). SQL Server Database Engine - relational, XML, JSON, spatial, full-text, binary, image, FileTable, FileStream; columnstore, row store, in-memory; Buffer Pool Extension, Query Store, Resource Governor; row-level security, Transparent Data Encryption, Always Encrypted, data masking, auditing, compliance; SQL Agent, Database Mail, linked servers, managed backup, backup to Azure; PolyBase. HA/DR - FCI, Availability Groups (including async replicas), Stretch DB, log shipping, replication (including transactional replication). BI and advanced analytics - Analysis Services (tabular, multidimensional, PowerPivot, data mining, KPI, BISM), Reporting Services (native, SharePoint integrated), R Services. Data warehousing - dimensional modeling, star, snowflake, PolyBase; APS massively parallel processing. Information management & data orchestration - Integration Services, Data Quality Services, Master Data Services. Deployed physical or virtual, via reference architectures or appliances.]

SQL Server Continuum


Who manages what?

On-premises (physical/virtual): you manage, scale, and make resilient everything - applications, data, runtime, middleware, O/S, virtualization, servers, storage, networking.

Infrastructure as a Service (Azure Virtual Machines): Microsoft manages virtualization, servers, storage, and networking; you scale, make resilient, and manage the O/S, middleware, runtime, data, and applications.

Platform as a Service (Azure Cloud Services): you manage your applications and data; scale, resilience, and management of everything else is by Microsoft.

Software as a Service: scale, resilience, and management of the entire stack, including applications and data, is by Microsoft.

21

The data platform continuum: one consistent platform with common tools, from on-premises (SQL Server) to cloud (SQL Server in Microsoft Azure Virtual Machines and Microsoft Azure SQL Database), with consistent tools across.

In this session we will take a closer look at the third pillar, or key investment area, for SQL Server 2014, which is building a data platform for the hybrid cloud. One of the key design points in our approach to cloud computing is to drive toward a consistent data platform with common tools from on-premises to cloud. When it comes to Microsoft's cloud, we have two offerings that can be used to run relational databases in the cloud, depending on your application needs. Let's take a closer look at both of these Microsoft Azure options.

22

SQL Server in an Azure VM (IaaS)

Azure VM
VM hosted on Microsoft Azure infrastructure (IaaS)
From Microsoft images (gallery) or your own images (custom): SQL 2008 R2 / 2012 / 2014 / 2016, Web / Standard / Enterprise; images refreshed with the latest version, SP, and CU
Fast provisioning (~10 minutes); provision groups of servers with resource templates
Accessible via RDP and PowerShell
Full compatibility with SQL Server box software

Pay per use
Per minute (only when running)
Cost depends on size and licensing; EA customers can use existing SQL licenses (BYOL)
Network: only outgoing (not incoming)
Storage: only used (not allocated)

Elasticity: from 1 core / 2 GB mem / 1 TB up to 32 cores / 448 GB mem / 64 TB


Microsoft Azure VMs - Performance
Different VM options
A-Series: slowest CPU, least memory (A8-A11 compute intensive)
Av2-Series: more memory, faster IOPS
D-Series: faster CPU, more memory; non-persistent SSD drive (good for TempDB)
Dv2-Series: 35% faster CPU than D-Series
F-Series: same performance as Dv2-Series but cheaper
DS-Series: same CPU and memory as D-Series; supports Premium Storage (good for data, log, and TempDB!!)
DSv2-Series: two additional sizes
G-Series: fastest CPU, most memory
GS-Series: fastest CPU, most memory; supports Premium Storage
G5: 32 cores, 448 GB mem, 6 TB SSD, 64 1 TB disks - biggest VM on the market

Different storage options
Standard Storage: low throughput (max 500 IOPs per disk), high latency (avg 40 ms), pay for used space
Premium Storage: high throughput (max 5,000 IOPs per disk), low latency (avg 4 ms), pay for allocated space
Azure calculator: https://azure.microsoft.com/en-us/pricing/calculator/

https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-sizes

Note that the IOPS figure for standard storage is a maximum rather than an expected number. For premium storage, IOPS are not just maximums but expected levels of performance.

Standard Disk Storage (HDD): 1 GB-1,023 GB, 500 IOPs, 60 MB/s throughput; $2/month per 100 GB (locally redundant), $5/month per 100 GB (geo-redundant), $6/month per 100 GB (read-access geo-redundant)
Premium Disk Storage (SSD):
P10: 128 GB, 500 IOPs, 100 MB/s throughput, $20/month
P20: 512 GB, 2,300 IOPs, 150 MB/s throughput, $73/month
P30: 1,024 GB, 5,000 IOPs, 200 MB/s throughput, $135/month
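As a quick way to compare these series and their limits in your target region, a sketch like the following (assuming the AzureRM PowerShell module and an authenticated session; the region and name filter are illustrative) lists cores, memory, and data-disk counts per size:

```powershell
# Sketch: list VM sizes in a region to compare cores, memory, and max data disks.
# Assumes the AzureRM module is installed; Login-AzureRmAccount opens the session.
Login-AzureRmAccount

Get-AzureRmVMSize -Location "East US" |
    Where-Object { $_.Name -like "Standard_DS*" -or $_.Name -like "Standard_GS*" } |  # Premium Storage capable series
    Sort-Object NumberOfCores, MemoryInMB |
    Format-Table Name, NumberOfCores, MemoryInMB, MaxDataDiskCount -AutoSize
```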


VM Gallery Images via Azure Marketplace
Certified pre-configured software images (1,250 on 2/23/2017)
https://azure.microsoft.com/en-us/marketplace/virtual-machines/

Azure web portal -> New -> Marketplace (see all)
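The same gallery can be browsed from PowerShell; a rough sketch (the publisher name is the SQL Server one, while the offer name shown is just an example and varies by SQL Server version):

```powershell
# Sketch: enumerate SQL Server images available in a region (names are examples).
$location = "East US"

# Offers published by the SQL Server team
Get-AzureRmVMImageOffer -Location $location -PublisherName "MicrosoftSQLServer" |
    Select-Object Offer

# SKUs within one offer, e.g. Enterprise, Standard, Web, SQLDEV
Get-AzureRmVMImageSku -Location $location -PublisherName "MicrosoftSQLServer" -Offer "SQL2016SP1-WS2016" |
    Select-Object Skus
```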


Azure Quickstart Templates: free, community-contributed templates (467 on 2/23/17)

https://azure.microsoft.com/en-us/documentation/templates/

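Deploying one of these templates is essentially a one-liner once you have a resource group; a sketch (the template path and parameter names are illustrative - each quickstart template defines its own parameters):

```powershell
# Sketch: deploy a community quickstart template by URI into a new resource group.
New-AzureRmResourceGroup -Name "sqlvm-demo-rg" -Location "East US"

New-AzureRmResourceGroupDeployment -ResourceGroupName "sqlvm-demo-rg" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json" `
    -TemplateParameterObject @{
        adminUsername = "sqladmin"
        adminPassword = "<strong password>"   # required parameters depend on the chosen template
    }
```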

Virtual machine storage architecture
C:\ OS disk (127 GB), usually 115 GB free
E:\, F:\, etc.: data disks (1 TB each) - attach SSD/HDD disks of up to 1 TB; these are .vhd files
D:\ temporary disk (contents can be lost) - SSD/HDD type and size depend on the VM chosen
Disk cache; disks are backed by Azure Blob storage

https://blogs.msdn.microsoft.com/mast/2013/12/06/understanding-the-temporary-drive-on-windows-azure-virtual-machines/

SSD/HDD storage included in A-series, D-series, and Dv2-series VMs is local temporary storage.

DS-series, G-series, and GS-series SSDs have less local temporary storage because some of it is used for caching to ensure the predictable levels of performance associated with Premium Storage. DS-series and GS-series support Premium Storage disks, which means you can attach SSDs to the VM (the other series support only standard storage disks). The pricing and billing meters for the DS sizes are the same as the D-series, and the GS sizes are the same as the G-series.

When you create an Azure virtual machine, it has a disk for the operating system mapped to drive C (127 GB in size) that is on Blob storage, and a local temporary disk mapped to drive D. You can choose a standard or premium disk type (if DS-series or GS-series) for your local temporary disk, the size of which is based on the series you choose (i.e. A0 is 20 GB). You can also attach new disks: specify standard or premium; for standard, specify a size (1 GB-1,023 GB); for premium, specify P10, P20, or P30. The disks are .vhd files that reside in an Azure storage account.
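Attaching an additional data disk to an existing VM can be scripted as well; a sketch using the AzureRM module and unmanaged (.vhd) disks as described above (resource group, VM, storage account, and disk names are examples):

```powershell
# Sketch: attach an empty 1 TB data disk with read caching to an existing SQL Server VM.
$vm = Get-AzureRmVM -ResourceGroupName "sqlvm-demo-rg" -Name "sqlvm01"

$vm = Add-AzureRmVMDataDisk -VM $vm `
    -Name "sqlvm01-data1" `
    -VhdUri "https://mystorageacct.blob.core.windows.net/vhds/sqlvm01-data1.vhd" `
    -Lun 0 -DiskSizeInGB 1023 -CreateOption Empty -Caching ReadOnly

Update-AzureRmVM -ResourceGroupName "sqlvm-demo-rg" -VM $vm   # pushes the new disk to the VM
```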

Azure Default Blob Storage

Azure Storage page blobs, 3 copies
Storage high durability built in (like having RAID)
VHD disks, up to 1 TB per disk (64 TB total)

29

Geo-storage replication

3 copies locally, another 3 copies in a different region
Disable for SQL Server VM disks (consistent write order across multiple disks is not guaranteed); instead use the DR techniques in this deck

Defend against regional disasters: geo-replication

http://www.jamesserra.com/archive/2015/11/redundancy-options-in-azure-blob-storage/

Geo-replication of Azure disks does not support storing the data file and log file of the same database on separate disks. GRS replicates changes on each disk independently and asynchronously. This mechanism guarantees the write order within a single disk on the geo-replicated copy, but not across geo-replicated copies of multiple disks. If you configure a database to store its data file and its log file on separate disks, the recovered disks after a disaster may contain a more up-to-date copy of the data file than the log file, which breaks the write-ahead log in SQL Server and the ACID properties of transactions. If you do not have the option to disable geo-replication on the storage account, you should keep all data and log files for a given database on the same disk. If you must use more than one disk due to the size of the database, you need to deploy one of the disaster recovery solutions listed above to ensure data redundancy. https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-sql-high-availability-dr/

30
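If the storage account only holds SQL Server VM disks, switching it to locally redundant storage is a single call; a sketch (account and resource group names are examples, and the -SkuName parameter name can differ between AzureRM.Storage versions):

```powershell
# Sketch: turn off geo-redundancy on the storage account that holds the VM's disks.
Set-AzureRmStorageAccount -ResourceGroupName "sqlvm-demo-rg" `
    -Name "mystorageacct" `
    -SkuName "Standard_LRS"
```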

Storage configuration

Automatically creates one Windows storage space (virtual drive) across all disks.

Up to 64 1TB disks for 64TB of drive space.

In the past, after provisioning a SQL Server VM, you had to manually attach and configure the right number of data disks to provide the desired number of IOPs or throughput (MB/s). Then you needed to stripe your SQL files across the disks or create a storage pool to divide the IOPs or throughput across them. Finally, you'd have to configure SQL Server according to the performance best practices for an Azure VM. We've now made this part of the provisioning experience. You can easily configure the desired IOPs, throughput, and storage size within the limits of the selected VM size, as well as the target workload to optimize for (online transaction processing or data warehousing). As you change the IOPs, throughput, and storage size, we'll automatically select the right number of disks to attach to the VM. During VM provisioning, if more than one disk is required for your specified settings, we'll automatically create one Windows storage space (virtual drive) across all disks.
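If you provisioned the VM another way, the same striping can be done by hand inside the guest OS with the Windows storage cmdlets; a sketch (pool, disk, and volume names are examples, and the 64 KB allocation unit follows the best-practice guidance later in this deck):

```powershell
# Sketch: pool all attachable data disks, stripe them into one virtual disk, and format it.
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "SQLPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName "SQLPool" -FriendlyName "SQLData" `
    -ResiliencySettingName Simple -NumberOfColumns $disks.Count -UseMaximumSize |
    Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLData"
```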


Azure Regions
38 regions worldwide, 32 generally available
100+ datacenters
Top 3 networks in the world
2.5x AWS, 7x Google DC regions
G-Series: largest VM in the world - 32 cores, 448 GB RAM, SSD

https://azure.microsoft.com/en-us/regions/

32

Migrating Data - migrate from on-prem SQL Server to an Azure VM (IaaS):
Use the Deploy a SQL Server Database to a Microsoft Azure VM wizard. Recommended method for migrating an on-premises user database when the compressed database backup file is less than 1 TB. Use on SQL Server 2005 or greater to SQL Server 2014 or greater.
Perform an on-premises backup using compression, manually copy the backup file into the Azure virtual machine, and then restore (only if you cannot use the above wizard or the database backup size is larger than 1 TB). Use on SQL Server 2005 or greater to SQL Server 2005 or greater.
Perform a backup to URL and restore into the Azure virtual machine from the URL (a sketch follows below). Use on SQL Server 2012 SP1 CU2 or greater to SQL Server 2012 SP1 CU2 or greater.
Detach the database, copy the data and log files to Azure blob storage, and then attach to SQL Server in the Azure VM from URL. Use on SQL Server 2005 or greater to SQL Server 2014 or greater.
Convert the on-premises physical machine to a Hyper-V VHD, upload it to Azure Blob storage, and then deploy it as a new VM using the uploaded VHD. Use when bringing your own SQL Server license, when migrating a database that you will run on an older version of SQL Server, or when migrating system and user databases together as part of the migration of a database dependent on other user databases and/or system databases. Use on SQL Server 2005 or greater to SQL Server 2005 or greater.
Ship a hard drive using the Windows Import/Export Service. Use when the manual copy method is too slow, such as with very large databases. Use on SQL Server 2005 or greater to SQL Server 2005 or greater.
If you have an AlwaysOn deployment on-premises and want to minimize downtime, use the Add Azure Replica Wizard to create a replica in Azure, then fail over, pointing users to the Azure database instance. Use on SQL Server 2012 or greater to SQL Server 2012 or greater.
If you do not have an AlwaysOn deployment on-premises and want to minimize downtime, use SQL Server transactional replication to configure the Azure SQL Server instance as a subscriber, then disable replication, pointing users to the Azure database instance. Use on SQL Server 2005 or greater to SQL Server 2005 or greater.
Others: data-tier application, Transact-SQL scripts, SQL Server Import and Export Wizard, SSIS, Copy Database Wizard.

https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-migrate-onpremises-database/

http://itproguru.com/expert/2015/03/how-to-move-or-migrate-sql-server-workload-to-azure-sql-database-cloud-services-or-azure-vm-all-version-of-sql-server-step-by-step/
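As an illustration of the backup-to-URL path from the list above, a sketch using the pre-SQL 2016 credential syntax, with the T-SQL run through Invoke-Sqlcmd; instance names, database name, storage account, and key are placeholders:

```powershell
# Sketch: back up on-prem to a blob, then restore on the SQL Server instance in the Azure VM.
# 1) On the on-premises instance: create a storage credential and back up to URL.
Invoke-Sqlcmd -ServerInstance "onprem-sql01" -Database "master" -QueryTimeout 65535 -Query @"
CREATE CREDENTIAL AzureBackupCred
    WITH IDENTITY = 'mystorageacct',
    SECRET = '<storage-account-access-key>';

BACKUP DATABASE [SalesDB]
    TO URL = 'https://mystorageacct.blob.core.windows.net/backups/SalesDB.bak'
    WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION, STATS = 10;
"@

# 2) On the Azure VM instance: create the same credential and restore from the URL.
Invoke-Sqlcmd -ServerInstance "sqlvm01" -Database "master" -QueryTimeout 65535 -Query @"
CREATE CREDENTIAL AzureBackupCred
    WITH IDENTITY = 'mystorageacct',
    SECRET = '<storage-account-access-key>';

RESTORE DATABASE [SalesDB]
    FROM URL = 'https://mystorageacct.blob.core.windows.net/backups/SalesDB.bak'
    WITH CREDENTIAL = 'AzureBackupCred';
"@
```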


Scale VMs

http://www.sqlservercentral.com/blogs/sqlsailorcom/2015/09/24/azure-virtual-machine-blog-series-changing-the-size-of-a-vm/


Scale VMs

Be aware a region may not support the VM size
Resizing requires just a VM reboot if the new size is in the same family or the Azure hardware cluster supports the new VM size
If the hardware cluster does not support the new VM size:
If using the Resource Manager (ARM) deployment model, you can resize VMs by first stopping your VM, selecting a new VM size, and then restarting the VM
If using the Classic (ASM) deployment model, VMs must be deleted and then recreated using the same OS and data disks. See PowerShell script

https://azure.microsoft.com/en-us/blog/resize-virtual-machines

https://buildwindows.wordpress.com/2015/10/11/azure-virtual-machine-resizing-consideration/

When a VM is running it is deployed to a physical server. The physical servers in Azure regions are grouped together in clusters of common physical hardware. A running VM can easily be resized to any VM size supported by the current cluster of hardware supporting the VM.
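A sketch of the ARM resize flow described above (resource group, VM name, and target size are examples): try an in-place resize first, and fall back to deallocate, resize, and start if the current hardware cluster does not offer the new size.

```powershell
# Sketch: resize a VM; fall back to stop (deallocate) + resize + start when needed.
$rg = "sqlvm-demo-rg"; $vmName = "sqlvm01"

$vm = Get-AzureRmVM -ResourceGroupName $rg -Name $vmName
$vm.HardwareProfile.VmSize = "Standard_DS13_v2"

try {
    Update-AzureRmVM -ResourceGroupName $rg -VM $vm -ErrorAction Stop   # in-place resize (VM reboots)
}
catch {
    Stop-AzureRmVM  -ResourceGroupName $rg -Name $vmName -Force         # deallocate
    Update-AzureRmVM -ResourceGroupName $rg -VM $vm
    Start-AzureRmVM -ResourceGroupName $rg -Name $vmName                # lands on a cluster that supports the size
}
```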


HA/DR deployment architectures

HA/DR options: AlwaysOn Failover Cluster Instances (FCI), AlwaysOn Availability Groups, Database Mirroring, Log Shipping, Backup to Azure (blob storage), Azure Site Recovery.

Availability Groups - Azure only: availability replicas running across multiple datacenters in Azure VMs for disaster recovery; a cross-region solution protects against complete site outage. Hybrid: some availability replicas running in Azure VMs and other replicas running on-premises for cross-site disaster recovery.

Failover Cluster Instances - HA only, not DR: FCI on a two-node WSFC running in Azure VMs with storage supported by a third-party clustering solution, or FCI on a two-node WSFC running in Azure VMs with remote iSCSI Target shared block storage via ExpressRoute.

Database Mirroring - Azure only: principal and mirror servers running in different datacenters for disaster recovery, or principal, mirror, and witness running within the same Azure datacenter (deployed using a DC or server certificates) for HA. Hybrid: one partner running in an Azure VM and the other running on-premises, for cross-site disaster recovery using server certificates.

Log Shipping - DR only / hybrid only: one server running in an Azure VM and the other running on-premises for cross-site disaster recovery. Log shipping depends on Windows file sharing, so a VPN connection between the Azure virtual network and the on-premises network is required; requires AD deployment on the DR site.

Backup to Azure - on-prem or Azure production databases backed up directly to Azure blob storage for disaster recovery. SQL 2016: backup to Azure with file snapshots; native support for SQL Server data files stored as Azure blobs.

Azure Site Recovery - simpler BCDR story: Site Recovery makes it easy to handle replication, failover, and recovery for your on-premises workloads and applications (not data!). Flexible replication: you can replicate on-premises servers, Hyper-V virtual machines, and VMware virtual machines. Eliminates the need for a secondary site.

https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-classic-sql-dr/

It is up to you to ensure that your database system possesses the HADR capabilities that the service-level agreement (SLA) requires. The fact that Azure provides high availability mechanisms, such as service healing for cloud services and failure recovery detection for Virtual Machines (https://azure.microsoft.com/en-us/blog/service-healing-auto-recovery-of-virtual-machines), does not itself guarantee you can meet the desired SLA. These mechanisms protect the high availability of the VMs but not the high availability of SQL Server running inside the VMs. It is possible for the SQL Server instance to fail while the VM is online and healthy. Moreover, even the high availability mechanisms provided by Azure allow for downtime of the VMs due to events such as recovery from software or hardware failures and operating system upgrades.

36

SQL Server in Azure VM best practices
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-performance-best-practices/
VM size
DS3 or higher for SQL Enterprise edition
DS2 or higher for SQL Standard and Web editions
Storage
Use Premium Storage
Keep the storage account and SQL Server VM in the same region
Disable Azure geo-redundant storage (geo-replication) on the storage account (consistent write order across multiple disks is not guaranteed)
Disks
Use a minimum of 2 P30 disks (1 for log files; 1 for data files and TempDB)
Avoid using operating system or temporary disks for database storage or logging
Enable read caching on the disk(s) hosting the data files and TempDB
Do not enable caching on disk(s) hosting the log file
Stripe multiple Azure data disks to get increased IO throughput
Format with documented allocation sizes
I/O
Enable database page compression
Enable instant file initialization for data files
Limit or disable autogrow on the database
Disable autoshrink on the database
Move all databases to data disks, including system databases
Move SQL Server error log and trace file directories to data disks
Set up default backup and database file locations
Enable locked pages
Apply SQL Server performance fixes
(A few of the database-level items are sketched below.)
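A sketch of a few of the database-level items above, run through Invoke-Sqlcmd (instance, database, table, and logical log-file names are examples; OS-level items such as instant file initialization and locked pages are Windows policy settings rather than T-SQL):

```powershell
# Sketch: apply a few of the best practices listed above to an example database.
Invoke-Sqlcmd -ServerInstance "sqlvm01" -Database "SalesDB" -Query @"
-- Enable page compression on a large table
ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Disable autoshrink and replace percentage autogrow with a fixed increment
ALTER DATABASE [SalesDB] SET AUTO_SHRINK OFF;
ALTER DATABASE [SalesDB] MODIFY FILE (NAME = N'SalesDB_log', FILEGROWTH = 512MB);
"@
```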


Azure SQL Database (PaaS/DBaaS)

SQL Database Service
Elastic scale & performance | Business continuity & data protection | Familiar & self-managed
Note: new features will be in SQL Database before SQL Server!
Self-service restore, disaster recovery, compliance-enabled, familiar & compatible, programmatic, self-managed
A relational database-as-a-service, fully managed by Microsoft. For cloud-designed apps when near-zero administration and enterprise-grade capabilities are key. Perfect for organizations looking to dramatically increase the DB:IT ratio.

Predictable performance levels, programmatic scale-out, dashboard views of DB metrics

39

Azure SQL Database benefits (*data source & customer quotes: The Business Value of Microsoft Azure SQL Database Services, IDC, March 2015)

"Now, those people can do development and create more revenue opportunities for us." - Increased productivity: 47% of staff hours reclaimed for other tasks

"We can get things out faster with Azure SQL Database." - Faster time to market: 75% faster app deployment cycles

"To be able to do what we're doing in Azure, we'd need an investment of millions." - Lower TCO: 53% less expensive than on-prem/hosted

"The last time we had downtime, a half a day probably lost us $100k." - Reduced risks: 71% fewer cases of unplanned downtime

[Chart: DB management hours, Azure SQL Database vs. other]

42

Designed for predictable performance
Across Basic, Standard, and Premium, each performance level is assigned a defined level of throughput. Introducing the Database Transaction Unit (DTU), which represents database power and replaces hardware specs.

A redefined measure of power: % CPU, % read, % write, % memory. The DTU is defined by the bounding box for the resources required by a database workload and measures power across the performance levels.

Basic: 5 DTU; S0: 10 DTU; S1: 20 DTU; S2: 50 DTU; S3: 100 DTU
P1: 125 DTU; P2: 250 DTU; P4: 500 DTU; P6: 1,000 DTU; P11: 1,750 DTU; P15: 4,000 DTU

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers

43
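To see how close a database runs to its DTU bounding box, you can query the sys.dm_db_resource_stats view, which keeps roughly an hour of 15-second samples; a sketch (server, database, and credentials are placeholders):

```powershell
# Sketch: recent resource usage as a percentage of the DTU limit for each dimension.
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "SalesDB" `
    -Username "sqladmin" -Password "<password>" -Query @"
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
"@
```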

Scale DTUs

Changing the service tier and/or performance level of a database creates a replica of the original database at the new performance level, and then switches connections over to the replica. No data is lost during this process but during the brief moment when we switch over to the replica, connections to the database are disabled, so some transactions in flight may be rolled back. This window varies, but is on average under 4 seconds, and in more than 99% of cases is less than 30 seconds. If there are large numbers of transactions in flight at the moment connections are disabled, this window may be longer.

The duration of the entire scale-up process depends on both the size and service tier of the database before and after the change. For example, a 250 GB database that is changing to, from, or within a Standard service tier, should complete within 6 hours. For a database of the same size that is changing performance levels within the Premium service tier, it should complete within 3 hours.
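The tier or performance-level change itself is a single call (or the equivalent ALTER DATABASE ... MODIFY in T-SQL); a PowerShell sketch with example names and an example target of Standard S3:

```powershell
# Sketch: scale a database to a new service tier / performance level.
Set-AzureRmSqlDatabase -ResourceGroupName "sqldb-rg" `
    -ServerName "myserver" `
    -DatabaseName "SalesDB" `
    -Edition "Standard" `
    -RequestedServiceObjectiveName "S3"
```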

44

Setup Disaster Recovery

45

Reads are completed at the primary; writes are replicated to secondaries

Single logical database

Critical capabilities: create a new replica, synchronize data, stay consistent, detect failures, fail over. 99.99% availability.

High-availability platform: each database has one primary replica and multiple secondary replicas.

By storing your data in Azure SQL Database, you take advantage of many fault tolerance and secure infrastructure capabilities that you would otherwise have to design, acquire, implement, and manage. Azure SQL Database has a built-in high availability subsystem that protects your database from failures of individual servers and devices in a datacenter. Azure SQL Database maintains multiple copies of all data in different physical nodes located across fully independent physical sub-systems to mitigate outages due to failures of individual server components, such as hard drives, network interface adapters, or even entire servers. At any one time, three database replicas are running: one primary and two or more secondary replicas. Data is written to the primary and one secondary replica using a quorum-based commit scheme before the transaction is considered committed. If the hardware fails on the primary replica, Azure SQL Database detects the failure and fails over to a secondary replica. In case of a physical loss of a replica, a new replica is automatically created. So there are always at minimum two physical, transactionally consistent copies of your data in the datacenter.

46

Protect from data loss or corruption
Automatic backups
Self-service restore
Tiered retention policy: 7 days Basic, 35 days Standard & Premium; weekly backups up to 10 years (public preview)

Restore from backup

SQL Database Backups

Azure Storage

Restore to point-in-time or to point-of-deletion

A self-service feature available for Basic, Standard, and Premium databases. Supports business continuity by recovering a database from a recent backup after accidental data corruption or deletion.

Automatic backups: full backups weekly, differential backups daily, log backups every 5 minutes; daily and weekly backups automatically uploaded to geo-redundant Azure Storage

Self-service restore: point-in-time up to one-second granularity; REST API, Windows PowerShell, or portal; creates a new database in the same logical server

Tiered retention policy: Basic - 7 days, Standard - 14 days, Premium - 35 days; no additional cost to retain backups
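A sketch of a self-service point-in-time restore into a new database on the same logical server (resource names, the two-hour restore point, and the target tier are examples):

```powershell
# Sketch: restore the database as it was two hours ago into a new database.
$db = Get-AzureRmSqlDatabase -ResourceGroupName "sqldb-rg" -ServerName "myserver" -DatabaseName "SalesDB"

Restore-AzureRmSqlDatabase -FromPointInTimeBackup `
    -PointInTime (Get-Date).AddHours(-2) `
    -ResourceGroupName "sqldb-rg" `
    -ServerName "myserver" `
    -TargetDatabaseName "SalesDB_restored" `
    -ResourceId $db.ResourceId `
    -Edition "Standard" `
    -ServiceObjectiveName "S2"
```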

47

Geo-restore protects from disaster: restore from geo-redundant backups maintained in Azure Storage; restore to any Azure region; a built-in disaster recovery capability available for every database.

[Diagram: geo-redundant SQL Database backups in Azure Storage can be restored to any Azure region.]

With all tiers of SQL Database now supporting active geo-replication, do you still talk to customers about geo-restore? Or do you tell them to just use active geo-replication?

Bill Gibson: Both are valid. Geo-replication still doubles the cost, and for many customers that is more than they want.

A little more on this. MYOB, our largest external user, relies on geo-restore exclusively at this point, as an example. Regarding doubling the cost, we still have a now-deprecated option, Standard Geo-Replication, intended for DR only. This option produces a single non-readable secondary, which is only accessible after failover. A Standard Geo-Replication secondary is charged at 75% of the full database price. This option is being discontinued and replaced by active geo-replication, which, as you observe, is now available in all editions and, because it results in a readable secondary, is charged at 100%.
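A geo-restore sketch, recovering the most recent geo-redundant backup onto a server in another region (resource and server names and the target tier are examples):

```powershell
# Sketch: geo-restore a database into a DR server in a different region.
$geoBackup = Get-AzureRmSqlDatabaseGeoBackup -ResourceGroupName "sqldb-rg" `
    -ServerName "myserver" -DatabaseName "SalesDB"

Restore-AzureRmSqlDatabase -FromGeoBackup `
    -ResourceGroupName "sqldb-dr-rg" `
    -ServerName "myserver-dr" `
    -TargetDatabaseName "SalesDB" `
    -ResourceId $geoBackup.ResourceId `
    -Edition "Standard" `
    -ServiceObjectiveName "S2"
```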

48

Active geo-replication: mission-critical business continuity

Up to 4 secondaries

Service levels: Basic, Standard, and Premium
Self-service readable secondaries: up to 4
Regions available: any Azure region
Replication: automatic, asynchronous
Manageability tools: REST API, PowerShell, or Azure portal
Recovery Time Objective (RTO)
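Creating a readable secondary with active geo-replication is likewise a single call; a sketch (server and resource group names are examples; remember the readable secondary is billed as a full database):

```powershell
# Sketch: add a readable geo-secondary in another region for an existing database.
New-AzureRmSqlDatabaseSecondary -ResourceGroupName "sqldb-rg" `
    -ServerName "myserver" `
    -DatabaseName "SalesDB" `
    -PartnerResourceGroupName "sqldb-dr-rg" `
    -PartnerServerName "myserver-dr" `
    -AllowConnections "All"
```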
