Lawrence Livermore Labs talk 2011

DESCRIPTION

These slides are from a talk Ted Dunning gave at Lawrence Livermore Labs in 2011. The talk gives an architectural outline of the MapR system and then discusses how this architecture facilitates large scale machine learning algorithms.

TRANSCRIPT

Page 1: Lawrence Livermore Labs talk 2011


MapR Architecture and Machine Learning


Page 2: Lawrence Livermore Labs talk 2011


Outline

• MapR system overview
• Map-reduce review
• MapR architecture
• Performance results
• Map-reduce on MapR
• Machine learning on MapR

Page 3: Lawrence Livermore Labs talk 2011


Map-Reduce

[Diagram: map-reduce data flow from input through the shuffle to output.]

Page 4: Lawrence Livermore Labs talk 2011


Bottlenecks and Issues

• Read-only files
• Many copies in I/O path
• Shuffle based on HTTP
  • Can't use new technologies
  • Eats file descriptors
• Spills go to local file space
  • Bad for skewed distribution of sizes

Page 5: Lawrence Livermore Labs talk 2011


MapR Improvements

• Faster file system
  • Fewer copies
  • Multiple NICs
  • No file descriptor or page-buf competition
• Faster map-reduce
  • Uses distributed file system
  • Direct RPC to receiver
  • Very wide merges

Page 6: Lawrence Livermore Labs talk 2011


MapR Innovations

• Volumes
  • Distributed management
  • Data placement
• Read/write random access file system
  • Allows distributed meta-data
  • Improved scaling
  • Enables NFS access
• Application-level NIC bonding
• Transactionally correct snapshots and mirrors

Page 7: Lawrence Livermore Labs talk 2011


MapR's Containers

• Files/directories are sharded into blocks, which are placed into mini NNs (containers) on disks
• Each container contains
  • Directories & files
  • Data blocks
• Replicated on servers
• No need to manage directly
• Containers are 16-32 GB segments of disk, placed on nodes

Page 8: Lawrence Livermore Labs talk 2011


Container locations and replication

[Diagram: CLDB entries listing the hosting nodes (N1, N2, N3) for each container.]

Container location database (CLDB) keeps track of nodes hosting each container

Page 9: Lawrence Livermore Labs talk 2011


MapR Scaling

• Containers represent 16-32 GB of data
  • Each can hold up to 1 billion files and directories
  • 100M containers = ~2 exabytes (a very large cluster)
• 250 bytes of DRAM to cache a container
  • 25 GB to cache all containers for a 2 EB cluster (see the quick check below)
  • But not necessary; can page to disk
  • A typical large 10 PB cluster needs 2 GB
• Container reports are 100x-1000x smaller than HDFS block reports
  • Serve 100x more data nodes
  • Increase container size to 64 GB to serve a 4 EB cluster
• Map-reduce not affected
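The scaling numbers above are plain arithmetic. A quick back-of-the-envelope check, using the slide's round figures (~20 GB per container, 250 bytes of cache per container entry) rather than anything measured:

    # Rough check of the container-scaling arithmetic on this slide
    containers = 100_000_000              # 100M containers
    container_size = 20 * 2**30           # ~20 GB each (slide quotes 16-32 GB)
    cache_per_container = 250             # bytes of DRAM per container entry

    total_data = containers * container_size        # addressable data
    cldb_cache = containers * cache_per_container   # DRAM to cache every entry

    print(total_data / 2**60)   # ~1.9 EiB, i.e. roughly 2 exabytes
    print(cldb_cache / 2**30)   # ~23 GiB, i.e. roughly the 25 GB quoted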

Page 10: Lawrence Livermore Labs talk 2011


MapR's Streaming Performance

[Bar chart: read and write throughput in MB per second for raw hardware, MapR, and Hadoop; higher is better. Tests: (i) 16 streams x 120 GB, (ii) 2000 streams x 1 GB, on 11 x 7200 rpm SATA and 11 x 15K rpm SAS disks.]

Page 11: Lawrence Livermore Labs talk 2011


Terasort on MapR

[Bar chart: terasort elapsed time in minutes for 1.0 TB and 3.5 TB runs, MapR vs. Hadoop; lower is better. Cluster: 10+1 nodes, 8 core, 24 GB DRAM, 11 x 1 TB SATA 7200 rpm.]

Page 12: Lawrence Livermore Labs talk 2011


MUCH faster for some operations

[Chart: file-create rate vs. number of files (millions) on the same 10 nodes; an annotation marks where the comparison test was stopped.]

Page 13: Lawrence Livermore Labs talk 2011


NFS mounting models

• Export to the world
  • NFS gateway runs on selected gateway hosts
• Local server
  • NFS gateway runs on local host
  • Enables local compression and checksumming
• Export to self
  • NFS gateway runs on all data nodes, mounted from localhost

Page 14: Lawrence Livermore Labs talk 2011


Export to the world

[Diagram: an NFS client outside the cluster mounts through several NFS gateway servers.]

Page 15: Lawrence Livermore Labs talk 2011


Local server

[Diagram: the application and an NFS server run together on the client machine, which talks to the cluster nodes.]

Page 16: Lawrence Livermore Labs talk 2011


Universal export to self

[Diagram: a cluster node runs its own NFS server, and the local application mounts the cluster from localhost.]

Page 17: Lawrence Livermore Labs talk 2011


[Diagram: three identical cluster nodes, each running an NFS server and an application.]

Nodes are identical

Page 18: Lawrence Livermore Labs talk 2011


Sharded text indexing

• Mapper assigns document to shard
  • Shard is usually hash of document id
• Reducer indexes all documents for a shard
  • Indexes created on local disk
  • On success, copy index to DFS
  • On failure, delete local files
• Must avoid directory collisions
  • Can't use shard id!
• Must manage local disk space (see the sketch below)
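A minimal sketch of that sharding and cleanup logic, assuming a hypothetical build_index() helper and a fixed shard count; this is illustrative Python, not the actual Hadoop/MapR indexing code:

    import hashlib, os, shutil, uuid

    NUM_SHARDS = 16   # hypothetical shard count

    def shard_of(doc_id):
        # Mapper side: shard is a hash of the document id
        return int(hashlib.md5(doc_id.encode()).hexdigest(), 16) % NUM_SHARDS

    def index_shard(shard_id, docs, dfs_root):
        # Reducer side: build the index on local disk, publish it on success.
        # The working directory is keyed by a unique attempt id, not the shard
        # id, so re-executed or speculative attempts cannot collide.
        work_dir = "/tmp/index-%d-%s" % (shard_id, uuid.uuid4().hex)
        os.makedirs(work_dir)
        try:
            build_index(docs, work_dir)                  # hypothetical indexing helper
            shutil.copytree(work_dir, os.path.join(dfs_root, "shard-%d" % shard_id))
        finally:
            shutil.rmtree(work_dir, ignore_errors=True)  # manage local disk space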

Page 19: Lawrence Livermore Labs talk 2011


Conventional data flows

[Diagram: map → reducer → local disk → clustered index storage → local disk → search engine.]

Failure of a reducer causes garbage to accumulate on the local disk.

Failure of the search engine requires another download of the index from clustered storage.

Page 20: Lawrence Livermore Labs talk 2011


Simplified NFS data flows

[Diagram: map → reducer → clustered index storage → search engine, with no local-disk staging.]

Failure of a reducer is cleaned up by the map-reduce framework.

The search engine reads the mirrored index directly.
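Concretely, with the cluster mounted over NFS the reducer can skip the local-disk staging entirely. A tiny sketch, assuming a hypothetical /mapr/my.cluster mount point and output layout:

    import os

    out_dir = "/mapr/my.cluster/indexes/shard-7"   # illustrative path on the NFS mount
    os.makedirs(out_dir, exist_ok=True)

    index_bytes = b"example index payload"         # stand-in for the real index data

    # Ordinary file I/O lands directly in clustered storage; the search engine
    # on another node reads the same path, so there is no copy step and nothing
    # is left behind on local disk if the reducer fails.
    with open(os.path.join(out_dir, "segment-0"), "wb") as f:
        f.write(index_bytes)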

Page 21: Lawrence Livermore Labs talk 2011


Application to machine learning

• So now we have the hammer

• Let’s see some nails!

Page 22: Lawrence Livermore Labs talk 2011


K-means

• Classic E-M based algorithm (sketched below)
• Given cluster centroids:
  • Assign each data point to the nearest centroid
  • Accumulate new centroids
  • Rinse, lather, repeat
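A minimal sketch of one such iteration in NumPy (illustrative only; it assumes every cluster keeps at least one point):

    import numpy as np

    def kmeans_step(points, centroids):
        # E-step: assign each data point to its nearest centroid
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        assignment = distances.argmin(axis=1)
        # M-step: accumulate new centroids as the mean of each cluster's points
        return np.array([points[assignment == k].mean(axis=0)
                         for k in range(len(centroids))])

    # "Rinse, lather, repeat": iterate until the centroids stop moving, e.g.
    # for _ in range(20): centroids = kmeans_step(points, centroids)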

Page 23: Lawrence Livermore Labs talk 2011


K-means, the movie

[Diagram: the input and the current centroids feed an "assign to nearest centroid" step; the assignments are aggregated into new centroids, which feed the next pass.]

Page 24: Lawrence Livermore Labs talk 2011


But …

Page 25: Lawrence Livermore Labs talk 2011


Parallel Stochastic Gradient Descent

[Diagram: the input is split, a sub-model is trained on each split, and the sub-models are averaged into the model.]
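The pattern in the diagram, roughly: train an independent SGD sub-model on each input split, then average the parameters. A sketch only; sgd_train stands in for whatever sequential learner is actually used:

    import numpy as np

    def parallel_sgd(splits, init_params, sgd_train):
        # Each split is handled independently (e.g. one mapper per split)
        sub_models = [sgd_train(split, init_params.copy()) for split in splits]
        # The sub-models are then averaged into a single model
        return np.mean(sub_models, axis=0)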

Page 26: Lawrence Livermore Labs talk 2011


Variational Dirichlet Assignment

[Diagram: the input and the current model feed a "gather sufficient statistics" step, and the statistics are used to update the model.]

Page 27: Lawrence Livermore Labs talk 2011


Old tricks, new dogs

• Mapper
  • Assign point to cluster
  • Emit cluster id, (1, point)
• Combiner and reducer
  • Sum counts, weighted sum of points
  • Emit cluster id, (n, sum/n)
• Output to HDFS

Slide annotations: the centroids are read from HDFS to local disk by the distributed cache, the mapper reads them from local disk via the distributed cache, and the new centroids are written by map-reduce. (A sketch of the mapper and combiner follows below.)
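A sketch of that mapper and combiner/reducer logic in Python (illustrative, not the actual Mahout/Hadoop code); carrying (n, mean) pairs lets the same merge step serve as both combiner and reducer:

    import numpy as np

    def kmeans_map(point, centroids):
        # Assign the point to its nearest cluster
        cid = int(np.argmin(np.linalg.norm(centroids - point, axis=1)))
        return cid, (1, np.asarray(point))      # emit cluster id, (1, point)

    def kmeans_reduce(cid, values):
        # Works as combiner and reducer: sum counts and weighted sums of points
        n = sum(c for c, _ in values)
        s = sum(c * m for c, m in values)       # weighted sum of partial means
        return cid, (n, s / n)                  # emit cluster id, (n, sum/n)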

Page 28: Lawrence Livermore Labs talk 2011


Old tricks, new dogs

• Mapper
  • Assign point to cluster
  • Emit cluster id, (1, point)
• Combiner and reducer
  • Sum counts, weighted sum of points
  • Emit cluster id, (n, sum/n)
• Output to MapR FS (rather than HDFS)

Slide annotations: the centroids are simply read via NFS, and the new centroids are written by map-reduce.

Page 29: Lawrence Livermore Labs talk 2011


Click modeling architecture

[Diagram: input flows through feature extraction and down-sampling, a data join with side-data, and then sequential SGD learning; the extraction and join stages run as map-reduce, and the hand-off to the learner is now via NFS.]

Page 30: Lawrence Livermore Labs talk 2011


Poor man’s Pregel

• Mapper:

    while not done:
        read and accumulate input models
        for each input:
            accumulate model
        write model
        synchronize
        reset input format
    emit summary

• Lines in bold on the original slide (reading and writing models, and synchronizing) can use conventional I/O via NFS (see the sketch below)
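A rough Python rendering of that loop. All names here (merge, update, barrier, reset_input, the model directory layout) are hypothetical; the point is that the model reads, writes, and synchronization are plain file I/O on the NFS mount:

    import glob, pickle

    def iterative_mapper(my_id, inputs, model, model_dir, done, barrier, reset_input):
        while not done():
            # read and accumulate the models written by the other mappers
            for path in glob.glob(model_dir + "/current/*.model"):
                with open(path, "rb") as f:
                    model.merge(pickle.load(f))        # hypothetical merge
            for record in inputs:                      # for each input
                model.update(record)                   # accumulate model
            with open("%s/next/%s.model" % (model_dir, my_id), "wb") as f:
                pickle.dump(model, f)                  # write model via NFS
            barrier()                                  # synchronize
            inputs = reset_input()                     # reset input format
        return model.summary()                         # emit summary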

Page 31: Lawrence Livermore Labs talk 2011


Trivial visualization interface

• Map-reduce output is visible via NFS

• Legacy visualization just works

$ R
> x <- read.csv("/mapr/my.cluster/home/ted/data/foo.out")
> plot(error ~ t, x)
> q(save='n')

Page 32: Lawrence Livermore Labs talk 2011


Conclusions

• We used to know all this
• Tab completion used to work
• 5 years of work-arounds have clouded our memories
• We just have to remember the future