
Post on 22-Aug-2020


TRANSCRIPT

Soo-young Moon, Director

Securing Ultimate Data Power Through the Cloud

Sources: Automobility Los Angeles, Gartner, TechRadar, NASA Earthdata

• Total autonomy will be 100% accident-free only after testing a minimum of 10 billion miles.1

• Autonomous vehicles will generate and consume roughly 4 terabytes of data a day by 2020.2

• 20.4 billion things will be connected by 2020.3

• An animated film might render as much as 65 million hours of footage to come up with 90 minutes of worthwhile material.4

• NASA's Earth Observing System Data and Information System (EOSDIS) distributes almost 28 terabytes of data a day.5

• An airplane will generate 40 terabytes of data a day by 2020.6

Equivalents: 400,000 laps around the Earth · 2,000 HD movies · 3× the world's population · 7,400 years · 14,000 HD movies · 20,000 HD movies


The 4th Industrial Revolution

Reproducibility is hard/impossible


Let Researchers be Researchers

• Use laptops & desktop computers

• Overwhelmed by data

• Finding analyses is ever more difficult; sharing is even harder

• Reproducibility is hard/impossible

Reproducibility by default


Let Researchers be Researchers

Do more with hyper-scale:

• Service more users
• Run more projects
• Get results faster
• Run larger simulations
• Explore new insights (e.g., "What if?")

Remove current limitations:

• Modify more parameters
• Analyze more complex models
• Visualize larger results
• Run more iterations
• Generate higher-fidelity results
• Simulate longer periods of time


What would you do with 100x the scale?
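In practice, "100× the scale" often means sweeping 100× the parameter combinations in the same wall-clock time. A minimal sketch of such a sweep; `simulate` and the parameter grid are hypothetical stand-ins for a real solver:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def simulate(params):
    """Hypothetical stand-in for one simulation run."""
    mesh_size, time_step = params
    return mesh_size / time_step  # placeholder result

# More capacity lets you modify more parameters and run more iterations:
grid = list(product([100_000, 500_000, 1_200_000],  # mesh sizes
                    [0.01, 0.005, 0.001]))          # time steps

# With 100x the workers, a 100x larger grid finishes in the same time.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate, grid))

print(len(results), "runs completed")
```

The grid here has 9 points; on a hyper-scale pool the same pattern runs thousands of combinations without any change to the code.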

Demand for infrastructure

On-premises


Big Compute


Demand for infrastructure

On-premises

Fixed capacity

Fixed capability

Siloed environments

Data analytics

AI, IoT

New business demands

Regulations

Challenges with on-premises


Expand your environment to the cloud

Cloud

Demand for infrastructure

On-premises

Fixed demand

Variable demand

Big Compute opportunity
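The "fixed vs. variable demand" point can be made concrete with a toy capacity model. The demand profile and base capacity below are hypothetical, chosen only to show the shape of the argument:

```python
# Hedged illustration: why variable demand favors bursting to the cloud.
# The demand numbers are hypothetical; the shape of the argument is the point.

demand = [10, 10, 10, 80, 10, 10, 10]   # core-hours needed per day (one spike)

# On-premises: you must size for the peak and pay for it every day.
peak = max(demand)
onprem_core_hours = peak * len(demand)

# Cloud burst: a small fixed base, plus on-demand capacity only for the spike.
base = 10
burst_core_hours = sum(base + max(0, d - base) for d in demand)

print(f"on-prem (sized for peak): {onprem_core_hours} core-hours")
print(f"cloud burst:              {burst_core_hours} core-hours")
```

The spikier the demand, the larger the gap between fixed peak-sized capacity and pay-for-what-you-use bursting.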


End-to-end workflows in the cloud

Simplify and optimize infrastructure

Create new services and modernize the apps that matter

Start using the cloud without rewriting applications


Azure for every Big Compute workload

Specialized infrastructure for Big Compute:

• FPGA – microservices, AI/edge
• InfiniBand-connected CPU/GPU/storage available in the cloud
• NC – advanced simulation
• ND – artificial intelligence
• H, N, F, and G series VMs


Why InfiniBand RDMA?

InfiniBand roadmap
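One way to see why InfiniBand RDMA matters is the classic latency-bandwidth (alpha-beta) cost model for message passing. The figures below are typical order-of-magnitude assumptions, not benchmark results:

```python
# Alpha-beta cost model: time to send n bytes is alpha + n/beta.
# Representative order-of-magnitude numbers (assumptions, not measurements):
#   10GbE (TCP):       alpha ~ 50 us, beta ~ 1.25 GB/s
#   InfiniBand (RDMA): alpha ~ 2 us,  beta ~ 12.5 GB/s (100 Gb/s class)

def transfer_time_us(n_bytes, alpha_us, beta_gbps):
    return alpha_us + n_bytes / (beta_gbps * 1e3)  # GB/s -> bytes/us

msg = 8 * 1024  # an 8 KiB halo-exchange message, common in CFD solvers
eth = transfer_time_us(msg, alpha_us=50, beta_gbps=1.25)
ib  = transfer_time_us(msg, alpha_us=2,  beta_gbps=12.5)

print(f"10GbE:      {eth:.1f} us")
print(f"InfiniBand: {ib:.1f} us")
# For small, frequent messages the latency term (alpha) dominates,
# which is why RDMA's low latency matters for tightly coupled MPI codes.
```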


Cray in Azure

[Diagram: Avere hybrid architecture — virtual compute farm and virtual FXT in the cloud, physical FXT on-premises, backed by NAS and object storage (Bucket 1 … Bucket n)]

Customer needs → Avere delivers:

• Low-latency file access → edge-core architecture
• Scalable performance and HA → scale-out clustering (3 to 24 nodes per cluster)
• Familiar NFS & SMB interfaces → FlashCloud™ file system for object storage
• Manage as a single pool of storage → global namespace (GNS), FlashMove®
• Data protection → cloud snapshots, FlashMirror®
• High security → AES-256 encryption (FIPS 140-2 compliant), KMIP
• Efficiency → LZ4 compression


Performant hybrid storage with Avere

Hybrid/clustered Big Compute lifecycle: provisioning → cluster configuration → monitoring → optimization


CycleCloud

Video

Azure Batch

Azure Batch: VM management & job scheduling

Stack: Service / Solution → PaaS (Cloud Services) → IaaS (VM / VMSS) → Hardware
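The layering above reflects Azure Batch's core model: you create a pool of VMs, submit a job, and the service schedules the job's tasks onto pool nodes. The sketch below is a local, conceptual stand-in for that model, not the Azure Batch SDK; all names are illustrative:

```python
# Conceptual sketch of the Batch model (pool -> job -> tasks).
# This is NOT the Azure Batch SDK; it only illustrates the scheduling idea.
from dataclasses import dataclass, field

@dataclass
class Pool:
    node_count: int          # compute nodes provisioned for the pool

@dataclass
class Job:
    pool: Pool
    tasks: list = field(default_factory=list)

    def add_task(self, command: str):
        self.tasks.append(command)

    def schedule(self):
        # Assign tasks to pool nodes round-robin, standing in for
        # Batch placing tasks on available nodes as they free up.
        return [(i % self.pool.node_count, cmd)
                for i, cmd in enumerate(self.tasks)]

job = Job(Pool(node_count=4))
for frame in range(8):
    job.add_task(f"render --frame {frame}")

for node, cmd in job.schedule():
    print(f"node {node}: {cmd}")
```

The real service adds what the sketch omits: VM provisioning, retries, dependencies, and autoscaling, which is the "VM management & job scheduling" the slide refers to.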

Azure technology for high-energy physics computing

Video | Article

Situation: 3,000 international physicists working on the ATLAS project have collected hundreds of petabytes of data and now face the challenge of storing, accessing, and analyzing that information.

Solution: With the help of Microsoft Azure, researchers in the particle physics group at the University of Victoria created a flexible cloud system for large workloads such as high-energy physics computing. The system has streamlined user workflow, sparking a digital transformation in the research community.

University of Victoria


• DongAh University conducted a PoC to compare the performance and usability of Azure against an on-premises cluster.

• OpenFOAM, a free and open-source CFD* software toolbox, was used for a ship-resistance simulation.

• The team was satisfied with the faster speed of Azure, with virtually no overhead in data transfer and computation.

- MS cloud: E5-2667 v3 @ 3.2 GHz, 16 cores/node, 10G Ethernet
- Cluster: E5-2680 v4 @ 2.4 GHz, 28 cores/node, InfiniBand

Problem size: 1.2M-cell mesh

*CFD = Computational Fluid Dynamics
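A quick sanity check on the two node types (specs from the slide): aggregate clock rate per node is a crude proxy for peak throughput, ignoring microarchitecture, vector width, and memory differences:

```python
# Per-node aggregate clock rate for the two systems in the PoC
# (a crude proxy for throughput; ignores microarchitecture and memory).

azure_cores, azure_ghz = 16, 3.2    # E5-2667 v3 (slide figure)
onprem_cores, onprem_ghz = 28, 2.4  # E5-2680 v4 (slide figure)

azure_agg = azure_cores * azure_ghz      # "core-GHz" per node
onprem_agg = onprem_cores * onprem_ghz

print(f"Azure node:   {azure_agg:.1f} core-GHz at {azure_ghz} GHz/core")
print(f"On-prem node: {onprem_agg:.1f} core-GHz at {onprem_ghz} GHz/core")
# The on-prem node has more aggregate core-GHz, yet the slide reports
# Azure was faster: at a modest 1.2M-cell problem, per-core clock speed
# and low data-transfer overhead can outweigh raw core count.
```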


PoC: Ship resistance analysis with OpenFOAM

Big Compute challenges → how Azure solves them:

• Getting your workload into the cloud → simple, managed access to Big Compute
• Supporting hybrid use cases → CycleCloud for burst, including data and executables
• Moving data and apps → fast, repeatable, scalable deployment
• Managing bandwidth, security, & users → cost, user, and access controls
• Accessing the technology needed → leading high-performance technologies running in the cloud
• Building cloud-native applications → Azure Batch for resource provisioning and job scheduling


Conclusion: Making Big Compute a Reality

Thank You!

http://microsoft.com/hpc

