
Page 1: Flamingo (FEA) Spark Designer

A Spark Designer design proposal to support beginners in data analysis

Big Data Division | Byoung-gon Kim

Page 2: Flamingo (FEA) Spark Designer

Spark Designer is modeled on the Workflow Designer of Flamingo 2.0.

The model for Spark Designer

Page 3: Flamingo (FEA) Spark Designer

Spark Designer is modeled on the Workflow Designer of Flamingo 2.0.

Problems with the Flamingo 2.0 Workflow Designer

No metadata management

Only HDFS is supported as a data source

Limitations of MapReduce

Limitations in composing workflows

Page 4: Flamingo (FEA) Spark Designer

The Workflow Designer has to reflect a variety of functional characteristics, some of which conflict with each other, so it was split into two workflow designers.

Splitting the Flamingo 2.0 Workflow Designer

Workflow Designer → Oozie Designer + Spark Designer

Page 5: Flamingo (FEA) Spark Designer

• Provide an analysis UI that beginners can use easily

• Provide a UI that integrates preprocessing, machine learning, and more

• Provide metadata management capability

• Provide integrated capability to process many kinds of data

Goals of Spark Designer

Page 6: Flamingo (FEA) Spark Designer

Use cases for Spark Designer

Spark Designer aims to provide a User Interface and Framework that beginners can use easily.

Preprocessing: transforms data that is already held (can also be used to prepare the input and output data for machine learning)

Data Quality Check: data processing for data quality management (consistency, validity, completeness, …)

Machine Learning: runs machine learning algorithms provided by Spark MLlib on data that is already held (e.g., recommendation)

Data Source Integration: processes data from a variety of data sources such as RDBMS, files, and HDFS in one pass

Page 7: Flamingo (FEA) Spark Designer

Points to consider in implementing Spark Designer

Metadata management: managing metadata for the data is difficult, and the UI modules also need metadata management

Workflow management: what is the best way to represent a workflow?

Extending preprocessing/algorithm modules: extension with custom modules; the range of preprocessing modules to support; the range of machine learning modules to support

Workflow execution: who should manage the Spark jobs, and which component actually runs them?

Page 8: Flamingo (FEA) Spark Designer

Metadata management

• Hadoop MapReduce is a framework for parallel/distributed processing

• It is well suited to high-performance processing, but offers no way to manage metadata

• Apache Hive's HCatalog is not widely adopted or practical enough to be an alternative

• Recent Spark versions make metadata management possible through their RDD, DataFrame, and Dataset support

• File metadata can now be managed, complementing the metadata management missing from the existing Workflow Designer (a minimal sketch follows)
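To illustrate the last point, here is a minimal sketch of letting Spark derive file metadata by inferring a schema from a CSV file; the SparkSession setup, the HDFS path, and the CSV options are assumptions for illustration, not part of the original design.

// Minimal sketch: infer file metadata (column names and types) from a CSV file.
// The path "hdfs:///data/products.csv" is hypothetical.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class FileMetadataSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("metadata-sketch").getOrCreate();

    Dataset<Row> df = spark.read()
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("hdfs:///data/products.csv");

    // The inferred schema is the file-level metadata referred to above.
    df.printSchema();
  }
}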

Page 9: Flamingo (FEA) Spark Designer

Spark's DataFrame & Dataset

• The unified Spark 2.0 API provides a more refined way of managing data

• In particular, the introduction of Dataset allows column-level metadata to be managed flexibly

• Dataset is supported only in Java and Scala (Python and R support only DataFrame), as sketched below
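A minimal sketch of the typed Dataset API in Java follows; it assumes the Person bean (with name and age) shown two slides later, and the sample values are illustrative only.

// Minimal sketch: build a typed Dataset from a JavaBean.
import java.util.Arrays;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;

public class TypedDatasetSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("dataset-sketch").getOrCreate();

    Person person = new Person();
    person.setName("Andy");
    person.setAge(32);

    // Encoders.bean() derives the column metadata (names and types) from the bean class,
    // which is why Dataset makes column-level metadata management easier.
    Dataset<Person> people = spark.createDataset(Arrays.asList(person), Encoders.bean(Person.class));
    people.printSchema();   // prints the schema derived from the bean fields
  }
}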

Page 10: Flamingo (FEA) Spark Designer

Spark's DataFrame & Dataset

• The greatest strength of DataFrame & Dataset is that data can be represented in a table structure

• Table-structure metadata can be defined (see the sketch below)
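The reverse direction also holds: the table-structure metadata can be read back out of any DataFrame or Dataset. The helper below is hypothetical and only illustrates this.

// Minimal sketch: enumerate the column metadata of a DataFrame/Dataset.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.types.StructField;

public class ColumnMetadataSketch {
  public static void printColumnMetadata(Dataset<?> ds) {
    for (StructField field : ds.schema().fields()) {
      // Each column carries its name, data type, and nullability as metadata.
      System.out.println(field.name() + " : " + field.dataType() + " (nullable=" + field.nullable() + ")");
    }
  }
}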

Page 11: Flamingo (FEA) Spark Designer

Metadata handling in Spark

• Define a POJO for the data, turn it into metadata, and then process the data

• Because a POJO has to be specified, there is very little flexibility

// Create an RDD of Person objects from a text file
JavaRDD<Person> peopleRDD = spark.read()
    .textFile("examples/src/main/resources/people.txt")
    .javaRDD()
    .map(new Function<String, Person>() {
        @Override
        public Person call(String line) throws Exception {
            String[] parts = line.split(",");
            Person person = new Person();
            person.setName(parts[0]);
            person.setAge(Integer.parseInt(parts[1].trim()));
            return person;
        }
    });

// Apply a schema to an RDD of JavaBeans to get a DataFrame
Dataset<Row> peopleDF = spark.createDataFrame(peopleRDD, Person.class);

// Register the DataFrame as a temporary view
peopleDF.createOrReplaceTempView("people");

// SQL statements can be run by using the sql methods provided by spark
Dataset<Row> teenagersDF = spark.sql("SELECT name FROM people WHERE age BETWEEN 13 AND 19");

Page 12: Flamingo (FEA) Spark Designer

Metadata handling in Spark

• String-based metadata can be registered for a data file

// Register the column metadata for the file as strings; DataTypeUtils here resolves a
// type name to a Spark DataType, and count() / first() are static imports from
// org.apache.spark.sql.functions.
StructType schema = new StructType(new StructField[]{
    new StructField("PRODUCT_CLASSIFICATION", DataTypeUtils.getDataType("STRING"), true, Metadata.empty()),
    new StructField("PRODUCT_NM", DataTypes.StringType, true, Metadata.empty()),
    new StructField("BRAND_LINE", DataTypes.StringType, true, Metadata.empty()),
    new StructField("USE_YN", DataTypes.StringType, true, Metadata.empty())
});

RelationalGroupedDataset grouped = ds.groupBy("PRODUCT_CLASSIFICATION");

Dataset<Row> count = grouped.count();
Dataset<Row> agg = grouped
    .agg(
        count("*").as("COUNT"),
        first("PRODUCT_NM").as("PRODUCT_NM"),
        first("BRAND_LINE").as("BRAND_LINE"))
    .filter("COUNT > 1");

agg.show();

Page 13: Flamingo (FEA) Spark Designer

• Load data from heterogeneous data sources, then join > encrypt > check data quality > save the results

Application scenario for Spark Designer – data quality management

Workflow: HDFS Input / JDBC Input / Hive Input → Group By Module → Join Module → Encryption Module → Data Quality Module → JDBC Output

Compared with an Oozie workflow, a Spark workflow handles all of this in a single process and is therefore more efficient (a single-job sketch follows).
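Below is a minimal sketch of the scenario as one Spark job. The paths, JDBC URL, credentials, table names, and column names are hypothetical, and the encryption and data-quality steps are only indicated with a placeholder comment.

// Minimal sketch: the whole data-quality scenario as a single Spark job.
import java.util.Properties;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class DataQualityScenarioSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("dq-scenario")
        .enableHiveSupport()          // required for the Hive Input module
        .getOrCreate();

    Properties jdbcProps = new Properties();
    jdbcProps.setProperty("user", "flamingo");
    jdbcProps.setProperty("password", "secret");

    // One process loads all three heterogeneous sources.
    Dataset<Row> hdfsInput = spark.read().option("header", "true").csv("hdfs:///data/orders.csv");
    Dataset<Row> jdbcInput = spark.read().jdbc("jdbc:mysql://db:3306/shop", "customers", jdbcProps);
    Dataset<Row> hiveInput = spark.table("warehouse.products");

    // Group By Module: aggregate the raw records.
    Dataset<Row> grouped = hdfsInput.groupBy("customer_id", "product_id").count();

    // Join Module: combine the aggregated data with the other two sources.
    Dataset<Row> joined = grouped
        .join(jdbcInput, "customer_id")
        .join(hiveInput, "product_id");

    // The Encryption Module and Data Quality Module would transform `joined` here,
    // e.g. masking sensitive columns and filtering rows that violate quality rules.

    // JDBC Output.
    joined.write().jdbc("jdbc:mysql://db:3306/shop", "dq_result", jdbcProps);
  }
}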

Page 14: Flamingo (FEA) Spark Designer

• Load rating data from HDFS and movie titles over JDBC, clean the data, train and evaluate an ALS model, then export the model and write the joined results back to HDFS

Application scenario for Spark Designer – recommendation engine (Collaborative Filtering)

Workflow modules: HDFS Input (Rating), Clean ETL, Remove ETL, JDBC Input (Movie Title), ALS Recommendation, Model Evaluator, Model Exporter, Join ETL, HDFS Output (two outputs)

Provides the flexibility to implement the whole complex recommendation process as a single Spark job (a sketch of the ALS step follows).
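A minimal sketch of the ALS Recommendation, Model Evaluator, and Model Exporter steps with Spark MLlib's DataFrame-based API follows; the ratings path, the column names (userId, movieId, rating), and the model output path are hypothetical.

// Minimal sketch: train, evaluate, and export an ALS recommendation model.
import org.apache.spark.ml.evaluation.RegressionEvaluator;
import org.apache.spark.ml.recommendation.ALS;
import org.apache.spark.ml.recommendation.ALSModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class AlsRecommendationSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("als-sketch").getOrCreate();

    Dataset<Row> ratings = spark.read()
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("hdfs:///data/ratings.csv");

    Dataset<Row>[] splits = ratings.randomSplit(new double[]{0.8, 0.2});
    Dataset<Row> training = splits[0];
    Dataset<Row> test = splits[1];

    // ALS Recommendation module.
    ALS als = new ALS()
        .setMaxIter(10)
        .setRegParam(0.01)
        .setUserCol("userId")
        .setItemCol("movieId")
        .setRatingCol("rating");
    ALSModel model = als.fit(training);

    // Model Evaluator module: RMSE on the held-out ratings.
    Dataset<Row> predictions = model.transform(test);
    RegressionEvaluator evaluator = new RegressionEvaluator()
        .setMetricName("rmse")
        .setLabelCol("rating")
        .setPredictionCol("prediction");
    System.out.println("RMSE = " + evaluator.evaluate(predictions));

    // Model Exporter module.
    model.write().overwrite().save("hdfs:///models/als");
  }
}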

Page 15: Flamingo (FEA) Spark Designer

Spark Designer UI Prototype

Screenshot of the browser-based designer prototype: a module palette and a workflow canvas.

Palette categories: DataSource (HDFS Input, JDBC Input, File Input, Hive Input, HDFS Output, JDBC Output, File Output), ETL, Data Quality (e.g. Incompleteness), Data Mining (e.g. CF Recommendation).

Canvas: the collaborative-filtering workflow from the previous slide (HDFS Input (Rating), Clean ETL, Remove ETL, JDBC Input (Movie Title), ALS Recommendation, Model Evaluator, Model Exporter, Join ETL, HDFS Output).

Toolbar: a Name field with Run, Save, and Copy buttons.

Page 16: Flamingo (FEA) Spark Designer

Directed Acyclic Graph (DAG)

• A graph whose edges are one-way and which contains no cycles

• Spark job processing has the characteristics of a directed acyclic graph

• People, however, want to build complex graphs in the designer, so the graph has to be interpreted

Page 17: Flamingo (FEA) Spark Designer

Interpreting the graph as a Directed Acyclic Graph (DAG)

Diagram: the graph as drawn in the designer and its interpretation as a DAG; both consist of HDFS Input, JDBC Input, and Hive Input feeding Group By Module → Join Module → Encryption Module → Data Quality Module → JDBC Output.

Page 18: Flamingo (FEA) Spark Designer

Representing the DAG with GraphML

• Decide how to represent the DAG that is currently drawn in Spark Designer

• The existing Flamingo Workflow Designer has its own workflow schema

• GraphML is being considered as a candidate technology

<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <graph id="G" edgedefault="directed">
    <node id="n0"/>
    <node id="n1"/>
    <edge id="e1" source="n0" target="n1"/>
  </graph>
</graphml>

Page 19: Flamingo (FEA) Spark Designer

Determining the DAG order with a graph algorithm

• Build the Spark Designer workflow as a graph and determine its execution order

// Build the workflow as a JGraphT directed graph.
DirectedGraph<Module, DefaultEdge> g = new DefaultDirectedGraph<>(DefaultEdge.class);

Module hdfsInput = new Module("HDFS Input", "INPUT", "HDFS_INPUT");
Module jdbcInput = new Module("JDBC Input", "INPUT", "JDBC_INPUT");
…
Module join = new Module("Join", "ETL", "JOIN");
Module encryption = new Module("Encryption", "DQ", "ENC");
Module dataQuality = new Module("Data Quality", "DQ", "DQ");
Module jdbcOutput = new Module("JDBC Output", "OUTPUT", "JDBC_OUTPUT");

// add the vertices
g.addVertex(hdfsInput);
g.addVertex(jdbcInput);
…
g.addVertex(join);
g.addVertex(encryption);
g.addVertex(dataQuality);
g.addVertex(jdbcOutput);

// add the edges (the connections drawn between modules)
g.addEdge(hdfsInput, groupBy);
g.addEdge(jdbcInput, groupBy);
...
g.addEdge(join, encryption);
g.addEdge(encryption, dataQuality);
g.addEdge(dataQuality, jdbcOutput);

// visit the modules in topological order (JGraphT's TopologicalOrderIterator)
TopologicalOrderIterator<Module, DefaultEdge> orderIterator = new TopologicalOrderIterator<Module, DefaultEdge>(g);
while (orderIterator.hasNext()) {
    Module module = orderIterator.next();
    System.out.println(module);
}

Output: HDFS Input, JDBC Input, Hive Input, Group By, Join, Encryption, Data Quality, JDBC Output

Page 20: Flamingo (FEA) Spark Designer

Executing a Spark Designer workflow

Architecture diagram: a Designer Controller and Workflow Service, a Metadata Resolver backed by a Metadata Repository, and an Execution Planner that can produce Closest First, Topological Order, Depth First, Random Walk, or Breadth First execution plans; an Oozie Workflow Builder, Oozie Workflow Service, and Oozie Workflow Runner submit the resulting Spark Designer Job to Apache Oozie (a sketch of the Execution Planner follows).
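A minimal sketch of how such an Execution Planner could map the five plan types onto JGraphT traversal iterators is shown below. The ExecutionPlanner class, the PlanType enum, and the start parameter are assumptions for illustration; only the iterator classes come from JGraphT (org.jgrapht.traverse), and this assumes a JGraphT version that provides all five.

// Hypothetical ExecutionPlanner: chooses a JGraphT traversal for each plan type.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.jgrapht.DirectedGraph;
import org.jgrapht.graph.DefaultEdge;
import org.jgrapht.traverse.BreadthFirstIterator;
import org.jgrapht.traverse.ClosestFirstIterator;
import org.jgrapht.traverse.DepthFirstIterator;
import org.jgrapht.traverse.RandomWalkIterator;
import org.jgrapht.traverse.TopologicalOrderIterator;

public class ExecutionPlanner {

  public enum PlanType { CLOSEST_FIRST, TOPOLOGICAL_ORDER, DEPTH_FIRST, RANDOM_WALK, BREADTH_FIRST }

  // Returns the workflow modules in the order produced by the requested execution plan.
  public List<Module> plan(DirectedGraph<Module, DefaultEdge> graph, Module start, PlanType type) {
    Iterator<Module> it;
    switch (type) {
      case CLOSEST_FIRST:
        it = new ClosestFirstIterator<Module, DefaultEdge>(graph, start);
        break;
      case DEPTH_FIRST:
        it = new DepthFirstIterator<Module, DefaultEdge>(graph, start);
        break;
      case RANDOM_WALK:
        it = new RandomWalkIterator<Module, DefaultEdge>(graph, start);
        break;
      case BREADTH_FIRST:
        it = new BreadthFirstIterator<Module, DefaultEdge>(graph, start);
        break;
      case TOPOLOGICAL_ORDER:
      default:
        it = new TopologicalOrderIterator<Module, DefaultEdge>(graph);
        break;
    }
    List<Module> order = new ArrayList<>();
    while (it.hasNext()) {
      order.add(it.next());
    }
    return order;
  }
}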

Page 21: Flamingo (FEA) Spark Designer

Open questions to work through

• Whether to represent the Spark Designer workflow as XML or in a database

• Review and design of how custom modules could be added to Spark Designer

• How should metadata be handled between Spark Designer modules?

• How should metadata be handled when the connections between modules change?

• Consider how Spark Designer could be used as a module of the Oozie Workflow Designer

Page 22: Flamingo (FEA) Spark Designer

Thank you.

Big Data Division | Byoung-gon Kim