YARN and NTT's Contribution @ Cloudera World Tokyo 2013
NTT meets Hadoop - Our contribution to Hadoop and YARN - @ Cloudera World Tokyo 2013
2013/11/7 NTT
Tsuyoshi Ozawa
2 © 2013 NTT Software Innovation Center
• Tsuyoshi Ozawa
  • Researcher & Engineer @ NTT
  • Twitter: @oza_x86_64
• A Hadoop contributor
  • Author of Chapter 22 (YARN) of “Hadoop 徹底入門 2nd Edition”
About me
3 © 2013 NTT Software Innovation Center
• NTT and Hadoop
  • Why Hadoop?
  • Our Hadoop usage
  • Our contribution to Apache Hadoop
• Technical hot topics in the Hadoop community
  • Hot topics of each Hadoop component
  • YARN
    • What’s YARN?
    • WIPs:
      • ResourceManager HA
      • Llama (long-lived application master)
      • Long-running services
• Summary
Agenda
4 © 2013 NTT Software Innovation Center
• Deep and wide experience in introducing open source software technologies
• For data management, 11 years with PostgreSQL, including mission-critical cases
• Leading Hadoop deployments in Japan for large-scale, high-volume data processing, which naturally fits “Big Data” and “Enterprise Batch”
Why Hadoop?
[Figure: latency (sec–day) vs. data size (GB–PB). RDBMS and low-latency serving systems cover online processing at small scale; DWHs and search engines cover query & search processing; Hadoop covers online batch, enterprise batch, and big data processing up to PB scale.]
5 © 2013 NTT Software Innovation Center
• Schema on Write
  • The traditional distributed DB approach
• Pros
  • Minimal overhead at query time
• Cons
  • Schema and workload must be fixed in advance
  • High overhead at load time
Why is an RDBMS better for smaller data?
[Diagram: a schema- and workload-aware data load into Column1–Column4, then a query joining Column1 and Column3 on id. E.g. a column store.]
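To make the trade-off concrete, here is a minimal, hypothetical Java sketch (not from the talk): the schema is enforced at load time, so queries touch only well-typed rows.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of schema-on-write (illustrative, not from the talk):
// the schema is enforced when a row is loaded.
public class SchemaOnWrite {
    static final class Row {
        final int id;
        final String name;
        final double price;
        Row(int id, String name, double price) {
            this.id = id; this.name = name; this.price = price;
        }
    }

    private final List<Row> table = new ArrayList<>();

    // Load time: parse and validate immediately ("high overhead at load time").
    void load(String csvLine) {
        String[] f = csvLine.split(",");
        if (f.length != 3) {
            throw new IllegalArgumentException("expected 3 columns: " + csvLine);
        }
        table.add(new Row(Integer.parseInt(f[0]), f[1], Double.parseDouble(f[2])));
    }

    // Query time: typed access, no parsing ("minimal overhead at query time").
    double totalPrice() {
        double sum = 0;
        for (Row r : table) sum += r.price;
        return sum;
    }
}
```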
6 © 2013 NTT Software Innovation Center
• Schema on Read
  • Hadoop’s approach
  • Relational mapping at read time (processing time)
• Pros
  • Flexible schema and workload
  • Minimal overhead at load time
• Cons
  • High overhead at query time
Why is Hadoop better for bigger data?
[Diagram: rows Col1-1…Col4-2 are loaded scalably as-is, then a query reads all data with runtime filtering. E.g. HDFS + MapReduce.]
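By contrast, a minimal schema-on-read sketch with the stock MapReduce Mapper API: raw lines land in HDFS untouched, and the relational mapping (and filtering) happens at query time inside the mapper. The column meanings are illustrative.

```java
import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Minimal schema-on-read sketch: the "schema" is applied only here, at read time.
public class SchemaOnReadMapper
        extends Mapper<LongWritable, Text, Text, DoubleWritable> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String[] f = line.toString().split(",");
        if (f.length < 3) {
            return; // runtime filtering: skip rows that do not fit today's schema
        }
        // Interpret column 2 as a name and column 3 as a price, only now.
        context.write(new Text(f[1]), new DoubleWritable(Double.parseDouble(f[2])));
    }
}
```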
7 © 2013 NTT Software Innovation Center
• Mobile Spatial Statistics (NTT Docomo)
  • http://www.nttdocomo.co.jp/english/corporate/technology/rd/technical_journal/bn/vol14_3/index.html
• Historical search for Twitter’s data (NTT DATA)
• Buzz Finder (NTT Communications)
  • Twitter analytics
  • Hadoop Summit 2011
  • http://www.slideshare.net/cloudera/hadoop-world-2011-large-scale-log-data-analysis-for-marketing-in-ntt-communications
We’re Hadoop users! (Services)
8 © 2013 NTT Software Innovation Center
• NTT DATA has over 6 years of experience and over 30 production cases with Hadoop
• We help enterprise customers design, integrate, deploy, and run large clusters in the range of 20 to 1200+ nodes, up to 4 PB
We’re Hadoop users! (System Integration)
[Figure: the same latency vs. data size chart, with NTT DATA’s production cases plotted on it: financial, media, public-sector, and telecom deployments spanning online batch, query & search, enterprise batch, and big data processing.]
9 © 2013 NTT Software Innovation Center
• There is an impedance mismatch between the Hadoop community and our needs
• Examples of our needs:
  • Easier operations (HA features)
  • More metrics
  • More documentation
  • More understandable logging
• Our answer: *writing code* is the fastest way to get our needs reflected!
• Bridging the gap
  • Accelerates development of new features
  • Builds know-how about new features
Why are we contributing to Apache Hadoop?
(Callouts: some of these needs are important for both the community and us; others are important for us but have lower priority for the community.)
10 © 2013 NTT Software Innovation Center
• Easier operations
  • Enhancing HA features
  • Rethinking the state machines of HA components from the operators’ point of view
• Wider use cases: optimizing MapReduce (see the sketch below)
  • Node-level combiner for MapReduce (MAPREDUCE-4502)
  • Presented at the pre-Strata/Hadoop meetup in New York
  • Container reuse (MAPREDUCE-3902)
Examples:
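For context on the combiner work: the stock MapReduce API already supports a per-map-task combiner, wired as in the standard word-count sketch below; MAPREDUCE-4502 proposes aggregating one level higher, across all map tasks on the same node, before the shuffle. Class names here are illustrative.

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CombinerSketch {
    public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(Object key, Text value, Context ctx)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                ctx.write(word, ONE);
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "combiner-sketch");
        job.setJarByClass(CombinerSketch.class);
        job.setMapperClass(TokenMapper.class);
        // The stock, per-map-task combiner; MAPREDUCE-4502 would aggregate
        // across all map tasks on a node instead.
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```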
11 © 2013 NTT Software Innovation Center
• Before: returns an IllegalArgumentException if the configuration is invalid
  • Difficult to debug the configuration!
Example: When starting up RM-HA…
12 © 2013 NTT Software Innovation Center
• After: users can see why it fails
Example: When starting up RM-HA…
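A hypothetical sketch of the kind of check involved (not the actual patch): validate the RM-HA settings up front and name the offending property, instead of surfacing a bare IllegalArgumentException from deep inside startup.

```java
import org.apache.hadoop.conf.Configuration;

// Illustrative only: shows the "name the property in the message" pattern.
public class HaConfValidator {
    static void verifyRmIds(Configuration conf) {
        String rmIds = conf.get("yarn.resourcemanager.ha.rm-ids");
        if (rmIds == null || rmIds.split(",").length < 2) {
            throw new IllegalArgumentException(
                "Invalid configuration: yarn.resourcemanager.ha.rm-ids must list "
                    + "at least two ResourceManager ids, but was: " + rmIds);
        }
    }
}
```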
13 © 2013 NTT Software Innovation Center
• Contributions
  • ResourceManager HA
  • Stabilization of MapReduce
  • Wrote a YARN article for a Hadoop book in Japanese
• Problem: time lag
  • Core Hadoop development happens on Silicon Valley time
  • PST and JST have a 17-hour time difference
  • Offline meetups: some are held every week
• Solution: visiting Silicon Valley and developing Hadoop there
My current activities
14 © 2013 NTT Software Innovation Center
• NTT DATA members are contributing to Hadoop!
  • Akira Ajisaka / @ajis_ka
  • Kousuke Saruta
  • Masatake Iwasaki
  • Shinichi Yamashita
• They contribute based on their Hadoop integration experience with enterprise customers!
  • Enhancing metrics, logging, and monitoring for robust design and rock-solid operations
  • Adding/improving documentation
  • Various bug fixes (HADOOP-9909, HIVE-5296, etc.)
  • Integration is our business: “Direct Connector for PostgreSQL” (SQOOP-390, SQOOP-999)
NTT DATA members’ contributions
15 © 2013 NTT Software Innovation Center
TECHNICAL PART
16 © 2013 NTT Software Innovation Center
• MapReduce
  • Shuffle plugin (MAPREDUCE-4049)
  • JobTracker (AppMaster) HA (MAPREDUCE-2708)
  • MapReduce itself is in a stable phase
  • Optimization work is happening in the Apache Tez/Spark/Impala projects
• HDFS
  • Cache management (HDFS-4949)
  • Snapshots (HDFS-2802)
  • Symbolic links (HADOOP-6421 etc.)
• YARN (New!): a new component for Hadoop 2.x
Hot topics of each Hadoop component
17 © 2013 NTT Software Innovation Center
• Yet Another Resource Negotiator
  • Proposed by Arun C Murthy in 2011
  • Separates the JobTracker’s roles:
    • Resource management/isolation
    • Task scheduling
• MapReduce v2 is the name for MapReduce over YARN
What’s YARN?
[Diagram: the MRv1 architecture, with MapReduce as a monolith, vs. the YARN architecture, where MRv2, Impala, and Spark run as frameworks on top of YARN.]
18 © 2013 NTT Software Innovation Center
• Running various processing frameworks on the same cluster
  • Batch processing with MapReduce
  • Interactive queries with Impala
  • Interactive deep analytics (e.g. machine learning) with Spark
Why YARN? (Use case)
[Diagram: MRv2, Impala, and Spark running on YARN over HDFS; MRv2 handles periodic long batch queries, Impala handles interactive aggregation queries, and Spark handles interactive machine-learning queries.]
19 © 2013 NTT Software Innovation Center
• More effective resource management for multiple processing frameworks
  • It is difficult to use the cluster’s entire resources without thrashing
  • *Real* big data cannot be moved out of HDFS/S3
Why YARN? (Technical reason)
[Diagram: separate masters for MapReduce and Impala, each with its own scheduler, share the same slaves (map/reduce slots, Impala slaves, HDFS slaves); running Job1 and Job2 side by side causes thrashing.]
20 © 2013 NTT Software Innovation Center
• Resources are managed by the JobTracker
  • Task scheduling
  • Resource management
MRv1 Architecture
[Diagram: a master for MapReduce manages map/reduce slots on every slave, while a separate master for Impala has no view of them; each scheduler only knows its own resource usage.]
21 © 2013 NTT Software Innovation Center
• Idea
  • One global resource manager (ResourceManager)
  • A common resource pool for all frameworks (NodeManager and Container)
  • A scheduler for each framework (AppMaster) — see the client-side sketch after the diagram below
YARN Architecture
[Diagram: the ResourceManager oversees the slaves, each running a NodeManager that hosts containers. (1) A client submits a job; (2) the ResourceManager launches a per-application master in a container; (3) the master launches slave containers.]
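A minimal client-side sketch of steps 1 and 2, assuming the Hadoop 2.x YarnClient API: ask the ResourceManager for a new application, describe the AppMaster's container, and submit. Step 3 is then driven by the AppMaster itself.

```java
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.ApplicationConstants;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.util.Records;

public class SubmitSketch {
    public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new Configuration());
        yarnClient.start();

        // 1. Submit jobs: ask the ResourceManager for a new application.
        ApplicationSubmissionContext appContext =
            yarnClient.createApplication().getApplicationSubmissionContext();
        appContext.setApplicationName("sketch");

        // 2. Launch Master: describe the container that runs the AppMaster.
        // (The command here is a trivial placeholder, not a real AppMaster.)
        ContainerLaunchContext amContainer =
            Records.newRecord(ContainerLaunchContext.class);
        amContainer.setCommands(Collections.singletonList(
            "/bin/date 1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout"));
        appContext.setAMContainerSpec(amContainer);
        appContext.setResource(Resource.newInstance(256, 1));

        // 3. Launch Slaves: done by the AppMaster itself via the AMRMClient.
        ApplicationId appId = yarnClient.submitApplication(appContext);
        System.out.println("Submitted " + appId);
    }
}
```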
22 © 2013 NTT Software Innovation Center
YARN and Mesos
YARN
• An AppMaster is launched for each job
• More scalability, higher latency
• One container per request
• One master per job

Mesos
• An AppMaster is launched for each app (framework)
• Less scalability, lower latency
• A bundle of containers per request
• One master per framework
[Diagram: in YARN, the ResourceManager launches Master1 and Master2 on demand across NodeManagers; in Mesos, the ResourceMaster hosts Master1 and Master2 as long-lived framework schedulers over the slaves.]
The policies/philosophies are different
23 © 2013 NTT Software Innovation Center
• From Hadoop World 2013
• YARN is becoming a real open kernel!
  https://twitter.com/ajis_ka/status/395572875400605696/photo/1/large
Applications?
24 © 2013 NTT Software Innovation Center
• ResourceManager High Availability (YARN-149, YARN-128)
• Llama (Long-lived Application MAster)
• Long-lived services in YARN (YARN-896)
Hot topics in YARN
25 © 2013 NTT Software Innovation Center
• What happens when the ResourceManager fails?
  • New jobs cannot be submitted
• NOTE:
  • Already-launched apps continue to run
  • AppMaster recovery is handled by each framework (e.g. MRv2)
ResourceManager High Availability
[Diagram: the same YARN cluster; with the ResourceManager down, the client cannot submit new jobs, but the running masters and slave containers continue running their jobs.]
26 © 2013 NTT Software Innovation Center
• Approach (configuration sketch below)
  • Store RM information in ZooKeeper
  • Automatic failover by ZKFC (ZooKeeper Failover Controller)
  • Manual failover via RMHAUtils
  • NodeManagers use a local RMProxy to access the RMs
ResourceManager High Availability
[Diagram: an active and a standby ResourceManager, each with a ZKFC, in front of a ZooKeeper ensemble holding RMState. (1) The active node stores all state in the ZK store; (2) the active node fails; (3) the ZKFC detects the failure and the standby node becomes active; (4) failover completes.]
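A minimal configuration sketch, assuming the property names that the YARN-149/YARN-128 work introduced (the feature was still in progress at the time of this talk); hostnames are placeholders.

```java
import org.apache.hadoop.conf.Configuration;

// Sketch of RM-HA settings; property names are from the configuration that
// eventually shipped with the RM-HA work, and hostnames are hypothetical.
public class RmHaConfSketch {
    public static Configuration rmHaConf() {
        Configuration conf = new Configuration();
        conf.setBoolean("yarn.resourcemanager.ha.enabled", true);
        // Two ResourceManagers: rm1 (active) and rm2 (standby).
        conf.set("yarn.resourcemanager.ha.rm-ids", "rm1,rm2");
        conf.set("yarn.resourcemanager.hostname.rm1", "master1.example.com");
        conf.set("yarn.resourcemanager.hostname.rm2", "master2.example.com");
        // ZooKeeper ensemble used for failover and for the RMStateStore.
        conf.set("yarn.resourcemanager.zk-address", "zk1:2181,zk2:2181,zk3:2181");
        conf.set("yarn.resourcemanager.store.class",
            "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore");
        return conf;
    }
}
```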
27 © 2013 NTT Software Innovation Center
• YARN’s resource allocation is optimized for long batch systems (like Hadoop!)
• Impala/Spark are low-latency query systems
• Impedance mismatch between YARN and Impala/Spark
  • http://cloudera.github.io/llama/
• Idea: a special AppMaster for low-latency apps
  • AppMaster pooling: acts as a proxy for resource negotiation
  • Gang scheduling: gets multiple containers at the same time (see the sketch below)
Llama (Long-lived Application MAster)
[Diagram: an Impala server talks to Llama, which fronts the ResourceManager and NodeManagers. (1) Llama is launched (first time only); (2) a job is submitted; (3) Llama performs the resource allocation. Source: http://cdn.oreillystatic.com/en/assets/1/event/100/From%20Promise%20to%20a%20Platform_%20Next%20Steps%20in%20Bringing%20Workload%20Diversity%20to%20Hadoop%20Presentation.pdf]
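A minimal sketch of the gang-scheduling idea using the plain Hadoop 2.x AMRMClient (Llama itself exposes its own API; this only illustrates "get multiple containers at the same time"): request the whole gang up front and launch nothing until every member has been allocated.

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class GangAllocationSketch {
    public static void main(String[] args) throws Exception {
        int gangSize = 4; // containers the query needs all at once
        AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
        rmClient.init(new Configuration());
        rmClient.start();
        rmClient.registerApplicationMaster("", 0, "");

        // Ask for every container of the gang up front.
        Resource capability = Resource.newInstance(1024, 1);
        Priority priority = Priority.newInstance(0);
        for (int i = 0; i < gangSize; i++) {
            rmClient.addContainerRequest(
                new ContainerRequest(capability, null, null, priority));
        }

        // Hold allocated containers until the whole gang is available,
        // then hand them to the framework in one batch.
        List<Container> gang = new ArrayList<>();
        while (gang.size() < gangSize) {
            gang.addAll(rmClient.allocate(0.1f).getAllocatedContainers());
            Thread.sleep(200);
        }
        // ... launch all members of the gang together ...
    }
}
```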
28 © 2013 NTT Software Innovation Center
• YARN as a multitenant platform
  • Like CloudFoundry or Mesosphere
• Hoya: HBase on YARN
  • http://hortonworks.com/blog/hoya-hbase-on-yarn-application-architecture/
Long-lived services for YARN
29 © 2013 NTT Software Innovation Center
• We use Hadoop
• We contribute to Hadoop
• We described Hadoop’s hot topics based on our experience in Apache Hadoop
• YARN is now a real open kernel for Big Data! (not only for Hadoop)
  • ResourceManager HA
  • Llama
  • Long-running services
Summary
30 © 2013 NTT Software Innovation Center
• "Hadoop - Lessons Learned from Deploying Enterprise Clusters" (Hadoop World NYC 2010) http://www.slideshare.net/cloudera/hadoop-world-2010-nyc-v12recruitclean
• "Hadoop’s Life in Enterprise Systems" (Hadoop World NYC 2011) http://www.slideshare.net/cloudera/hadoop-world-2011-hadoops-life-in-enterprise-systems-y-masatani-ntt-data
• NTT Docomo Technical Journal, Mobile Spatial Statisticshttp://www.nttdocomo.co.jp/english/corporate/technology/rd/technical_journal/bn/vol14_3/index.html
• Large Scale Log Data Analysis for Marketing in NTT Communications(Hadoop World 2011)http://www.slideshare.net/cloudera/hadoop-world-2011-large-scale-log-data-analysis-for-marketing-in-ntt-communications
• "NTT データの Hadoop ソリューション " http://oss.nttdata.co.jp/hadoop/
• "SI 事業の視点から見た Hadoop の適用領域と今後の展望 " (Hadoop Conference Japan 2009)http://www.slideshare.net/hadoopxnttdata/20091113-hadoop-conf-japan2009-v1a-clean
Links
31 © 2013 NTT Software Innovation Center
• "The Next Generation of Apache Hadoop MapReduce"
  http://developer.yahoo.com/blogs/hadoop/next-generation-apache-hadoop-mapreduce-3061.html
• "Llama: Low Latency Application Master"
  http://cloudera.github.io/llama
• "Resource Management with YARN and Impala"
  http://goo.gl/Rwq2aW
• "Hoya: HBase on YARN"
  http://hortonworks.com/blog/hoya-hbase-on-yarn-application-architecture/
• "Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center" (NSDI 2011)
  http://www.cs.berkeley.edu/~matei/papers/2011/nsdi_mesos.pdf
• "Apache Hadoop YARN: Yet Another Resource Negotiator" (SOCC 2013)
  http://goo.gl/Gnl9ZU
• Apache Tez
  http://hortonworks.com/hadoop/tez/
  http://incubator.apache.org/projects/tez.html
Links (YARN)