Source: networking.khu.ac.kr/layouts/net/publications/data/phd... (2020-02-10)
Attribution-NonCommercial-NoDerivatives 2.0 Korea (저작자표시-비영리-변경금지 2.0 대한민국). You are free to copy, distribute, transmit, display, perform, and broadcast this work, subject to the following conditions: in any reuse or distribution, you must make clear the license terms that apply to this work; these conditions do not apply if you obtain separate permission from the copyright holder. Your rights under copyright law are not affected by the above. This is a human-readable summary of the Legal Code. Disclaimer: Attribution (you must attribute the work to the original author); NonCommercial (you may not use this work for commercial purposes); NoDerivatives (you may not alter, transform, or build upon this work).




Thesis for the Degree of Doctor of Philosophy

IntelligentEdge: Joint Communication, Computation, Caching, and Control in Collaborative Multi-access Edge Computing

Anselme Ndikumana

Department of Computer Science & Engineering
Graduate School
Kyung Hee University
South Korea

August 2019



Dedicated To

The Almighty God, my beloved wife, children, family, friends, and all the

people who helped me to accomplish this dissertation.


Abstract

The concept of cloud computing was introduced to provide computer system resources such as storage, computing, applications, and other IT services on demand over the Internet. In other words, cloud computing delivers hosted services from remote servers over the Internet. However, due to the limited backhaul capacity of wireless networks, relying on remote servers incurs both high end-to-end delay and high backhaul bandwidth consumption. To address this challenge, Multi-access Edge Computing (MEC) was introduced, in which MEC servers are deployed at the edge of the network, close to the edge devices, to support cloud computing by minimizing backhaul bandwidth consumption and end-to-end delay. In other words, MEC servers assist cloud computing by alleviating the load on cloud data centers and providing cloud-computing services to the edge devices.

The MEC server allows real-time access to radio network information, and a network operator can permit third-party operators to implement edge applications and services for edge devices. However, adopting MEC in mobile network environments raises issues concerning the coordination of services between the MEC server and the mobile network. In addition, in the future, anything that can be connected to a network will be connected. With this increase in the number of connected things and user devices, data will be generated not only by people but also by things and machines. This era of connected things and people will therefore create a tsunami of data from the edge devices. Furthermore, offloaded tasks and data from edge devices may arrive at the MEC server at different rates and speeds and require immediate processing, such as real-time analytics for mission-critical applications. It is challenging for the MEC server to handle such tasks and data because of their timeliness, scale, and diversity. To overcome this challenge, the MEC server should be equipped with a big data platform and applications that can help in performing fast and


parallel processing of offloaded tasks and data in a distributed manner. In addition, the MEC server should be able to split data volumes, distribute computation tasks and data to various computing nodes, replicate partitioned data, and recover data partitions when necessary. However, compared with cloud computing, the MEC server's resources are limited. Therefore, without collaboration among MEC servers, a single MEC server cannot handle all the communication, computation, and storage demands stemming from edge devices. Consequently, the MEC server cannot significantly minimize delay and backhaul bandwidth consumption. To address these challenges, collaboration among MEC servers with a joint Computing, Caching, Communication, and Control (4C) approach is required.

In this dissertation, we propose a joint 4C framework in collaborative MEC. First, we propose a big-data MEC server structure and a new approach in MEC for forming collaboration spaces of MEC servers using an overlapping k-Means method. A collaboration space enables collaboration among MEC servers for handling the big data stemming from connected edge devices. Second, we formulate an optimization problem for joint 4C in collaborative MEC that optimizes both network latency and bandwidth consumption. However, the formulated optimization problem is intractable due to its non-convex structure. Third, to handle the formulated problem, we propose a proximal upper-bound problem, which is a convex upper bound of the formulated problem. Then, to solve the proximal upper-bound problem, we use Block Successive Upper-bound Minimization (BSUM) as a distributed algorithm that breaks the problem down into small subproblems, each of which can be handled separately. We choose the BSUM method over other distributed algorithms because it is a new approach for big-data optimization in terms of scalability, computational efficiency, and parallel implementation. Fourth, as an application of the joint 4C framework, we propose a caching approach for self-driving cars, where caching decisions and content retrieval are based on passengers' features, obtained using deep learning, and on the available communication, caching, and computation resources. We formulate an optimization problem for our caching approach that minimizes the total delay for retrieving contents subject to communication, computation, and caching resource constraints. Finally, using realistic datasets, the simulation results demonstrate that our approach minimizes both bandwidth consumption and network latency and meets computation deadline requirements.


Acknowledgement

First and foremost, I praise the Almighty God for giving me courage, strength, and patience in this Ph.D. journey.

I would like to take this opportunity to express my sincere thanks to my honorable advisor, Professor Choong Seon Hong, for his patience, motivation, and support during my study and research. His guidance helped me to enhance my research and to write this dissertation. The current achievement would not have been possible without his daily advice and support.

This dissertation comes out in its current form due to the guidance and support of several people. Besides my advisor, I also take this opportunity to thank my honorable thesis committee members for their constructive comments on my work, which significantly helped me to enhance the quality of this dissertation. My sincere thanks also go to Professor Tran Hoang Nguyen, who has greatly supported me in my study and research. I also thank all the members of the Intelligent Networking Lab for their generous support, cooperation, and encouragement during my study at Kyung Hee University.

I would like to express my gratitude and appreciation to Kyung Hee University for providing a favorable studying environment for my Ph.D. journey.

Anselme Ndikumana


Table of Contents

Abstract i

Acknowledgment iii

Table of Contents iv

List of Figures vii

1 Introduction 1

1.1 Background and Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.2 Challenges of MEC in Dealing with Big Data . . . . . . . . . . . . . . . . . . . 3

1.3 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.3.1 Collaboration Space for Multi-access Edge Computing . . . . . . . . . . 4

1.3.2 Joint 4C for Collaborative Multi-access Edge Computing . . . . . . . . . 5

1.3.3 Deep Learning Based Caching for Self-Driving Cars in Multi-access Edge Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.4 Related Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.4.1 MEC System Architectures and Standards . . . . . . . . . . . . . . . . . 7

1.4.2 Joint Computation and Caching (2C) . . . . . . . . . . . . . . . . . . . 8

1.4.3 Joint Big Data and Caching . . . . . . . . . . . . . . . . . . . . . . . . 9

1.4.4 Joint Communication and Caching (2C) . . . . . . . . . . . . . . . . . . 9

1.4.5 Joint Communication, Computation, and Caching (3C) . . . . . . . . . . 10

1.4.6 Caching for Cars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

1.5 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12


Chapter 2 Collaborative Multi-access Edge Computing 13

2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2.2 Background and Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.3 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.4 Collaboration Space in Multi-access Edge Computing . . . . . . . . . . . . . . . 18

2.4.1 Collaboration Space Formation . . . . . . . . . . . . . . . . . . . . . . 18

2.4.2 Overlapping k-Means Method for Collaboration Space (OKM-CS) . . . . 20

2.5 Simulation Results and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

Chapter 3 Joint 4C in Collaborative Multi-access Edge Computing 25

3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.2 Background and Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.2.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.2.2 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.3 System Model for Joint 4C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3.3.1 Communication Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3.3.2 Computation Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.3.2.1 Local Computation Model at Edge Device . . . . . . . . . . . 30

3.3.2.2 Computation Model at MEC Server . . . . . . . . . . . . . . . 32

3.3.3 Caching Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

3.3.4 Control Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

3.4 Problem Formulation and Solution . . . . . . . . . . . . . . . . . . . . . . . . . 36

3.4.1 Overview of BSUM Method . . . . . . . . . . . . . . . . . . . . . . . . 37

3.4.2 Proposed Solution: Distributed Optimization Control Algorithm . . . . . 40

3.5 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

3.5.1 Simulation Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

3.5.1.1 Performance Metrics . . . . . . . . . . . . . . . . . . . . . . 46

3.5.2 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

3.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53


Chapter 4 Deep Learning Based Caching for Self-Driving Cars in Multi-access Edge Computing 55

4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

4.2 Background and Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

4.2.1 Challenges for Caching in Self-Driving Cars . . . . . . . . . . . . . . . 57

4.2.2 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

4.3 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

4.3.1 Deep Learning and Recommendation Model . . . . . . . . . . . . . . . 63

4.3.1.1 Multi-Layer Perceptron (MLP) Model . . . . . . . . . . . . . 63

4.3.1.2 Convolutional Neural Network (CNN) Model . . . . . . . . . 65

4.3.1.3 Recommendation Model . . . . . . . . . . . . . . . . . . . . 67

4.3.2 Communication Model . . . . . . . . . . . . . . . . . . . . . . . . . . 69

4.3.3 Caching Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

4.3.4 Computation Model for Cached Content . . . . . . . . . . . . . . . . . . 76

4.3.5 Control Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

4.4 Problem Formulation and Solution . . . . . . . . . . . . . . . . . . . . . . . . . 79

4.4.1 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

4.4.2 Proposed Solution: Distributed Optimization Control Algorithm . . . . . 81

4.5 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

4.5.1 Simulation Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

4.5.2 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

4.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

Chapter 5 Conclusion and Future Directions 95

5.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

5.2 Future Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

Bibliography 98

Appendix A List of Publications 111


List of Figures

1.1 MEC deployment in 5G networks [12]. . . . . . . . . . . . . . . . . . . . . . . 2

2.1 System model for collaborative MEC. . . . . . . . . . . . . . . . . . . . . . . . 15

2.2 Big Data MEC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.3 Collaboration space formation (r = 100) using elbow method [76]. . . . . . . . . 22

2.4 Collaboration space formation (r = 500) using OKM-CS. . . . . . . . . . . . . . 22

2.5 Collaboration space formation (r = 1000) using OKM-CS. . . . . . . . . . . . . 23

2.6 Collaboration space formation (r = 2000) using OKM-CS. . . . . . . . . . . . . 23

3.1 Illustration of collaboration space with three communication scenarios (a), (b),

and (c) (Ref. 3.3.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3.2 Optimal value of Bj (3.33) (without rounding). . . . . . . . . . . . . . . . . . . 48

3.3 Optimal value of Bj + ξ∆ (after rounding). . . . . . . . . . . . . . . . . . . . . 48

3.4 CDF of computation throughput. . . . . . . . . . . . . . . . . . . . . . . . . . . 49

3.5 Transmission delay. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

3.6 Computation delay. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

3.7 Normalized cache hits in collaboration space. . . . . . . . . . . . . . . . . . . . 50

3.8 Generated content ranking using Zipf distribution [113]. . . . . . . . . . . . . . 52

3.9 Bandwidth saving due to caching. . . . . . . . . . . . . . . . . . . . . . . . . . 52

4.1 The impact of users’ features on choosing contents [89]. . . . . . . . . . . 59

4.2 The system model of caching in the self-driving car using deep learning. . . . . . 61

4.3 Recommendation model for self-driving car. . . . . . . . . . . . . . . . . . . . . 67


4.4 RSU selection process for self-driving car. . . . . . . . . . . . . . . . . . . . . . 70

4.5 RSU deployment, where each RSU has one MEC server for 4C. . . . . . . . 87

4.6 Minimization of the loss function for predicting movies needed at the edge (at the RSUs). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

4.7 An example of the top 8 movies that need to be cached at RSU 1. . . . . . . . . . 89

4.8 Age and gender-based clustering for passengers in a self-driving car. . . . . . . . 90

4.9 An example of the top 8 recommended movies to cache in a self-driving car. . 91

4.10 Normalized cache hits for the self-driving car. . . . . . . . . . . . . . . . . . . . 92

4.11 Total delay minimization problem. . . . . . . . . . . . . . . . . . . . . . . . . . 92


Chapter 1 Introduction

This chapter presents a brief background of Multi-access Edge Computing and its challenges. Then, we present our contributions for overcoming the identified challenges. Finally, we discuss the related works and the novelty of this dissertation relative to them.

1.1 Background and Motivation

Over the last few decades, users have played the role of both producers and consumers of data in various domains such as smart transportation, smart home, e-health, and smart city, where edge devices such as smartphones, tablets, and things are embedded with various sensors [1] for creating and collecting domain-related data. In addition, considering only Internet of Things (IoT) devices, it is estimated that there will be 22 billion connected things by the year 2025, without including user devices such as tablets, laptops, and smartphones [2]. Therefore, with this increase in the number of things, user devices will be connected to anything, anywhere, and anytime [3]. This massive interconnection of things and people will create huge growth of data traffic from edge devices, with different characteristics and forms such as structured, unstructured, semi-structured, and quasi-structured data. We therefore consider data from edge devices as big data due to its diversity, scale, distribution, and velocity. However, because edge devices have limited resources (e.g., CPU cycles, memory, I/O data rate, and battery power), they have to offload tasks and the corresponding data to the data centers or the cloud [4], which we assume have enough resources for handling big data from the edge. But relying on remote data centers can worsen the performance of delay-sensitive and mission-critical applications, which require reliable computation and low latency [3, 5].

To reduce delay and the exchange of data and tasks between edge devices and remote data centers, the European Telecommunications Standards Institute (ETSI) introduced Multi-access Edge Computing (MEC) at the edge of wireless networks to supplement data centers and cloud computing [22]. At the edge, MEC provides both cloud computing and IT-based services. In other words, MEC uses edge servers (MEC servers) to push Communication, Computation, Caching, and Control (4C) close to the edge devices at the edge of the network [7]. Here, the MEC server executes delay-sensitive and mission-critical applications near the places where the data are generated and utilized. Throughout this dissertation, we consider that MEC servers are implemented at the Base Stations (BSs) of a wireless network [8–10].

Figure 1.1: MEC deployment in 5G networks [12]. (Figure legend: NSSF = Network Slice Selection Function; NRF = Network Resource Function; UDM = Unified Data Management; PCF = Policy Control Function; NEF = Network Exposure Function; AUSF = Authentication Server Function; AMF = Access Management Function; SMF = Session Management Function; UE = User Equipment; RAN = Radio Access Network; UPF = User Plane Function; DN = Data Network; VI = Virtual Infrastructure; LA = Local Area; MP = MEC Platform; MPM = MEC Platform Manager; Naf = service-based interface exhibited by the AF; N4 = reference point between the SMF and the UPF; N6 = reference point between the UPF and a Data Network; N9 = reference point between two UPFs.)

As shown in Fig. 1.1 and described in [12], MEC can be deployed in 5G networks to align the MEC system with network virtualization and software-defined networking approaches. Furthermore, we consider that offloading tasks and data to edge/MEC servers, with collaboration among MEC servers, can help reduce the extensive data exchange between edge devices and the remote data centers. With the MEC server, offloaded data can be preprocessed, analyzed, and stored in close proximity to the edge devices, near where the data are generated and utilized. However, handling big data from edge devices at the MEC server requires big data platforms and edge analytics tools in the MEC server.


1.2 Challenges of MEC in Dealing with Big Data

MEC, as an evolution of cloud computing that hosts applications and IT-based services at the edge of the network, closer to the users and things, still faces the following challenges in dealing with big data from edge devices:

• Edge devices offload both tasks and data to the MEC server at different rates, and data arrive at MEC servers with variable flow sizes that need to be processed immediately to meet computation deadlines, e.g., live stream computation and real-time analytics [11]. It is challenging for the MEC server to deal with such tasks, which require loading and processing data, because of their timeliness, scale, and diversity. To overcome this challenge, MEC servers must have big data platforms and applications that can split data volumes, distribute computation tasks to various computing nodes, replicate data partitions, and recover data. This helps the MEC server perform fast parallel and distributed computing [46].

• Compared with cloud computing, the MEC server has limited resources for handling big data at the edge [13]. Consequently, if each MEC server works independently, without collaborating with other nearby MEC servers, it cannot handle the big data stemming from edge devices, i.e., it cannot significantly reduce delay or the exchange of tasks and data between edge devices and data centers. To overcome this challenge, MEC servers in the same or nearby areas have to collaborate by sharing resources and optimizing resource utilization [46].

• The MEC server can be allowed real-time access to radio network information, and MEC and application providers can implement edge applications and services for end users dealing with delay-sensitive and mission-critical applications [12]. However, coordinating the MEC server in a mobile network environment is challenging because both the mobile network and the MEC server's services must be controlled. Consequently, a new model that jointly handles communication, computing, caching, and control for collaborative MEC is needed.
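The first of these challenges (splitting data volumes and distributing computation across collaborating servers) can be illustrated with a minimal sketch. The proportional-split rule and all numbers below are illustrative assumptions, not the allocation scheme formulated later in this dissertation:

```python
# Sketch: split an offloaded data volume across collaborating MEC
# servers in proportion to their spare CPU capacity, so that parallel
# processing finishes at roughly the same time on every server.
# Illustrative only; not the optimization model used in this thesis.

def split_workload(data_mb, cpu_ghz_free):
    """Return the MB of data assigned to each server (proportional split)."""
    total = sum(cpu_ghz_free)
    return [data_mb * c / total for c in cpu_ghz_free]

# Three collaborating MEC servers with different spare capacity (GHz).
shares = split_workload(900.0, [2.0, 1.0, 3.0])
print(shares)  # [300.0, 150.0, 450.0]
```

A faster server receives proportionally more data, which balances the per-server completion times when processing speed scales with spare CPU.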


1.3 Contribution

To address the above-highlighted challenges, we propose a joint 4C model in collaborative MEC. In this dissertation, our contributions are grouped into three categories: collaboration space in big-data MEC; a joint communication, computation, caching, and control (4C) approach for MEC; and caching for self-driving cars as an application of the 4C framework, which depends on deep learning and on communication, computation, and caching resources. We summarize our contributions for each category as follows:

1.3.1 Collaboration Space for Multi-access Edge Computing

To reduce the task and data traffic between edge devices and data centers, i.e., to reduce backhaul bandwidth consumption and minimize end-to-end delay, MEC servers deployed in the same or nearby areas should collaborate to share resources and optimize resource utilization. This will help MEC servers handle the big data offloaded from edge devices at the edge of the network.

To satisfy edge devices' demands for computer system resources by effectively computing tasks and caching data at the edge (at MEC servers), we propose a big-data MEC structure and a new approach for forming collaboration spaces by clustering MEC servers. In our proposal, MEC servers of the same cluster/collaboration space collaborate for effective resource utilization. The key goal of the collaboration among MEC servers is to minimize both backhaul bandwidth consumption and network delay while maximizing edge resource utilization. Furthermore, to reduce communication delay among MEC servers in a collaboration space, we propose the Overlapping k-Means Method for Collaboration Space (OKM-CS), an application of the OKM algorithm for unsupervised machine learning [27] to the MEC architecture. Based on distance measurements and available resources, OKM-CS allows an MEC server to be a member of one or more collaboration spaces.
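To make the overlapping membership concrete, the sketch below first runs plain k-means on MEC server coordinates and then lets a server join every collaboration space whose centroid lies within a factor alpha of the distance to its nearest centroid. The threshold rule and the random coordinates are simplifying assumptions for illustration; the actual OKM assignment step (which minimizes the distance to the average of the assigned centroids) is described in Chapter 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(points, k, iters=50):
    """Plain Lloyd's k-means; returns the k centroids."""
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids

def overlapping_spaces(points, centroids, alpha=1.5):
    """A server joins every collaboration space whose centroid lies
    within alpha times the distance to its nearest centroid."""
    d = np.linalg.norm(points[:, None] - centroids[None], axis=2)
    nearest = d.min(axis=1, keepdims=True)
    return d <= alpha * nearest       # boolean matrix: server x space

servers = rng.uniform(0, 1000, size=(30, 2))  # 30 BS/MEC coordinates (m)
cents = kmeans(servers, k=4)
member = overlapping_spaces(servers, cents)
print(member.sum(axis=1))  # number of spaces each server belongs to
```

Servers near a boundary between two collaboration spaces end up with membership in both, which is exactly what lets neighboring spaces share resources.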


1.3.2 Joint 4C for Collaborative Multi-access Edge Computing

Offloading data to the MEC server for processing, analyzing, and caching in close proximity to where the data are produced and utilized requires communication resources. In addition, both the network's services and the MEC servers' services need to be controlled and coordinated. Therefore, instead of considering communication, computing, caching, and control independently, we need a model or framework that joins communication, computing, caching, and control (4C) for effectively processing and caching offloaded delay-sensitive and mission-critical data at the edge of the network, with low latency and efficient bandwidth utilization. We therefore propose a new model for joint 4C in collaborative MEC. In our proposal, both big data computation and caching functions are implemented at the edge MEC server rather than at remote data centers. This enables MEC servers to use communication resources efficiently while reducing network delay and the exchange of data and tasks between edge devices and remote data centers.

For the proposed joint 4C framework for collaborative MEC, we formulate an optimization problem that minimizes network latency and bandwidth consumption subject to the local computation capabilities of edge devices, computation deadlines, and the MEC server's communication, computing, and caching resource constraints. However, the formulated optimization problem for joint 4C is non-convex and intractable. Therefore, we introduce a proximal upper-bound problem of the formulated problem. To solve it, we need a distributed algorithm that decomposes the proximal upper-bound problem into subproblems and handles each subproblem separately. We choose Block Successive Upper-bound Minimization (BSUM) over other distributed algorithms due to its scalability, simplicity, and flexibility in parallel and distributed implementation [15]. We then use BSUM to solve the proximal upper-bound problem, as BSUM is considered an appropriate method for handling big data optimization problems.
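The BSUM idea can be illustrated on a toy convex quadratic, used here as a stand-in for the 4C objective rather than the actual formulation: each pass minimizes a proximal upper bound over one block of variables while the remaining blocks stay fixed.

```python
import numpy as np

rng = np.random.default_rng(1)
n, nblocks, rho = 6, 3, 1.0
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)      # positive definite, so f is strongly convex
b = rng.standard_normal(n)
blocks = np.array_split(np.arange(n), nblocks)

# f(x) = 0.5 x^T Q x - b^T x.  The BSUM step for block i minimizes
# f(x_i, x_-i^k) + (rho/2)||x_i - x_i^k||^2, which here has the closed
# form (Q_ii + rho I) x_i = b_i - Q_{i,-i} x_-i + rho x_i^k.
x = np.zeros(n)
for _ in range(200):
    for idx in blocks:
        rest = np.setdiff1d(np.arange(n), idx)
        rhs = b[idx] - Q[np.ix_(idx, rest)] @ x[rest] + rho * x[idx]
        x[idx] = np.linalg.solve(Q[np.ix_(idx, idx)] + rho * np.eye(len(idx)), rhs)

x_star = np.linalg.solve(Q, b)     # global optimum, for comparison
print(np.max(np.abs(x - x_star)))  # shrinks toward 0 as the sweeps converge
```

Because each block update has a small closed-form solution and touches only that block's variables, the sweeps can be distributed across nodes, which is the property that makes BSUM attractive for big-data problems.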

1.3.3 Deep Learning Based Caching for Self-Driving Cars in Multi-access Edge Computing

We consider the self-driving car as an example of an edge device that generates huge amounts of data; the self-driving car needs various smart sensors for collecting, in real time, heterogeneous data about the vehicle's occupants, pedestrians, traffic conditions, and the environment. The generated data are considered big data due to their diversity, scale, distribution, and velocity. Therefore, offloading these data to the remote cloud can consume backhaul bandwidth and cause high network delay. Collaborative MEC with the joint 4C framework can support self-driving cars in overcoming these challenges.

In terms of downloading, once self-driving cars become involved in public transportation and passengers become comfortable with them, self-driving cars will become new spaces for entertainment. However, retrieving infotainment contents from remote data centers can incur high end-to-end delay. This perturbs infotainment services, and passengers may experience lower Quality of Experience (QoE). Furthermore, in the self-driving car, the driver will be replaced by Artificial Intelligence (AI). The AI should therefore be an empathetic companion for the vehicle's occupants, assisting them and providing personalized services such as infotainment. The AI should also be able to analyze and understand the vehicle's occupants' features. Here, rather than human-driven cars, we choose self-driving cars because they are already equipped with On-Board Units (OBUs) and Graphics Processing Units (GPUs) that make it easy to deploy AI-based solutions.

To overcome the above-mentioned challenges, we propose a new caching approach for self-driving cars and MEC servers (attached to roadside units), where our caching approach is based on deep learning and 3C resources (communication, computation, and caching). In our approach, at the Data Center (DC), we propose to use a Convolutional Neural Network (CNN) for predicting passengers' features and a Multi-Layer Perceptron (MLP) for predicting the contents that should be cached in the areas where self-driving cars operate. Then, we deploy the MLP outputs and the CNN model at MEC servers, where each MEC server provides 3C resources. Based on its specific area, each car requests and downloads the MLP outputs and the CNN model from the MEC server. Using the CNN model, the self-driving car can predict its passengers' features and, by combining the CNN and MLP outputs, identify the infotainment contents that are appropriate for them. Finally, we formulate an optimization problem for deep learning based caching that aims to minimize the infotainment content downloading delay subject to 3C resource constraints, and we propose a distributed control optimization algorithm for solving the formulated problem. This process makes our problem fit into the joint 4C framework.
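As a rough illustration of the caching decision this optimization captures, the sketch below greedily fills a car's cache with the contents whose predicted request probabilities (standing in for the MLP outputs) yield the largest expected delay saving per unit of storage. The catalog entries, sizes, and delay figures are hypothetical, and the greedy rule is only a simple heuristic for the capacity-constrained placement, not the distributed algorithm proposed in Chapter 4.

```python
def select_cache(contents, capacity_mb):
    """Greedy cache placement: rank contents by expected delay saving
    per MB (request probability x saving / size) and fill the cache.

    contents: list of (name, size_mb, request_prob, delay_saving_ms)
    """
    ranked = sorted(contents,
                    key=lambda c: c[2] * c[3] / c[1], reverse=True)
    cached, used = [], 0.0
    for name, size, prob, saving in ranked:
        if used + size <= capacity_mb:
            cached.append(name)
            used += size
    return cached

# Hypothetical catalog: (name, size in MB, predicted request
# probability, delay saved per request in ms).
catalog = [("movie_a", 800, 0.05, 120), ("song_b", 8, 0.30, 90),
           ("clip_c", 40, 0.20, 100), ("show_d", 600, 0.10, 110)]
print(select_cache(catalog, capacity_mb=700))  # → ['song_b', 'clip_c', 'show_d']
```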

1.4 Related Works

There have been many related works on big data, caching, computation, communication, and caching for cars. However, the problem of joint 4C in MEC has not yet been tackled in the existing related works. Therefore, we group these related works into six groups: (i) MEC system architectures and standards, (ii) joint computation and caching, (iii) joint big data and caching, (iv) joint communication and caching, (v) joint communication, computation, and caching, and (vi) caching for cars.

1.4.1 MEC System Architectures and Standards

In December 2014, ETSI introduced the Mobile Edge Computing (MEC) Industry Specification Group (ISG) for accelerating and promoting edge computing in mobile networks. The ISG developed the MEC reference framework and architecture available in [22]. In September 2016, ETSI renamed Mobile Edge Computing to Multi-access Edge Computing (MEC) to include non-cellular networks. Furthermore, 3GPP has developed the requirements for the 5G network, where the 5G network is considered a new environment for deploying MEC in a Service-Based Architecture (SBA) [20]. Together with 3GPP, ETSI worked on MEC requirements for aligning the MEC system with 5G network virtualization and software-defined networking approaches [20], and it developed a MEC deployment proposal for the 5G network [12]. Our proposed Big Data MEC in [21] uses the MEC system structure of the ETSI MEC ISG [12, 22]. However, the ETSI MEC system structure in [22] is generic and does not specifically describe how MEC servers can handle big data from edge devices. Therefore, to overcome this issue, we proposed a Big Data MEC architecture that can handle big data from edge devices at the edge, collaboration between Big Data MEC servers, and an optimization approach for jointly handling communication, computation, caching, and control (4C) at the edge of the network. Moreover, our proposed Big Data MEC [21] does not contradict the 3GPP and MEC requirements [20]. It fits into the MEC deployment proposal for the 5G network, where our Big Data MEC can be implemented at a BS and collocated with the User Plane Function (UPF) to provide communication, computation, caching, and control at the edge, near where data are generated and utilized.

To handle edge data at the edge, AT&T, Intel, and OpenStack, in collaboration with the Linux Foundation, introduced Akraino as an edge solution [23]. The Akraino Edge Stack is an open-source, community-based project for a distributed edge platform [23]. Furthermore, Akraino works with StarlingX of OpenStack to provide computation, storage, and control at the edge for application services that require low latency (less than 20 ms) [24]. However, in their proposal, there is no formulation of how to jointly optimize computation, storage, and control. In addition, the technical requirements on how to deploy Akraino and StarlingX in the 5G network are still under development. Compared with the OpenStack and Linux Foundation proposal, our proposal in [21] has an advantage because it is already based on the ETSI MEC ISG requirements [12, 22]. Therefore, it can be easily implemented in the 5G network for handling the joint communication, computation, caching, and control problem. Furthermore, Cisco introduced a Multi-access Edge Computing approach where switching, computation, and storage are performed at the edge, while control is performed at the core of the network [26]. Compared with our proposal in [21], the Cisco proposal [26] offers no joint optimization of 4C at the edge because control is performed at the core of the network rather than at the edge.

1.4.2 Joint Computation and Caching (2C)

In [19], the authors proposed a new approach that combines computation and caching at the base station for minimizing communication delay between a remote cloud and edge devices. The authors introduced a new resource allocation algorithm that helps the BS handle computation offloading and data caching jointly. In [28], using the online secretary framework, the authors explored the idea of low-latency computation, where computational tasks are distributed among edge networks and the remote data center. Furthermore, to efficiently utilize resources at the edge, i.e., at the BS level, the authors in [29] introduced a new approach for video caching and processing through collaboration between edge/MEC servers attached to BSs. In their proposal, based on available resources and demands, edge servers can support each other. The authors formulated a joint caching and processing optimization problem for minimizing the backhaul network cost. Moreover, the authors in [30] proposed a

new framework that handles BS placement and mobility-aware caching jointly. In addition, they highlighted the relationships and differences between computation offloading and caching. Furthermore, at WiFi Access Points (APs), the authors in [31] introduced a caching approach based on content prefetching. The proposed approach uses network-level mobility prediction and aggregated network-level statistics, where users can retrieve cached contents at the AP level rather than from content providers, which helps save backhaul bandwidth. Furthermore, to handle mobile flash-crowd demands for contents, the authors in [32] introduced a new proactive caching approach based on mobility prediction for effectively prefetching contents in small cells.

1.4.3 Joint Big Data and Caching

The authors in [16] introduced a new big data caching approach that optimizes the mobile network by utilizing both users' features and the network's features. In addition, the authors in [17] introduced a new content caching approach in mobile networks based on centrality measurement. However, implementing the proposed big data approaches at the edge of the mobile network can raise certain challenges because, compared with the remote cloud, caching spaces at the edge are limited and sometimes small, which may cause a lower cache hit ratio. Therefore, to address this issue, the authors in [9] expressed the necessity of having cooperative caching at the edge of the network for low-latency content delivery. Furthermore, the authors in [3] established the connections between caching and big data in 5G wireless networks. In their proposal, the authors use a machine learning approach for predicting the popularity of contents. Furthermore, more applicable machine learning methods are examined and surveyed in [18].

1.4.4 Joint Communication and Caching (2C)

To enhance content delivery service by minimizing repetitive transmissions of the same data, the authors in [33] expressed the necessity of a content caching approach that can efficiently prevent redundant data transmission. Therefore, they introduced both a content delivery policy and a cooperative caching approach, in which both user equipment and femtocell BSs cache contents. Furthermore, in radio access networks (RANs), the authors in [34] introduced a collaborative caching approach that uses device-to-device (D2D) communication for caching contents in RANs. They formulated joint resource allocation and content caching as an optimization problem. Moreover, the authors in [35] also introduced joint communication and caching in heterogeneous cellular networks, where D2D communication is considered. To meet users' requirements in terms of Quality-of-Service (QoS), the authors formulated joint communication and caching as an optimization problem that maximizes the network throughput using an effective bandwidth resource allocation approach. Furthermore, in a drone-enabled environment, the authors in [36] also studied the problem of joint communication and caching.

1.4.5 Joint Communication, Computation, and Caching (3C)

The authors in [37] proposed a joint communication, computation, and caching approach in an information-centric heterogeneous network. In their proposal, the authors considered a virtualized MEC environment for communication, computing, and caching resources, where resources are allocated to multiple end-users of virtual service providers. Furthermore, for improving the utilization of computational resources at edge nodes, the authors in [38] formulated an optimization problem for a computation offloading, data caching, and resource allocation framework that considers the total revenue of the network. Moreover, the authors in [39] introduced a new approach that deals with communication, caching, and computing demands for meeting the requirements of next-generation green wireless networks. In addition, the authors in [40] introduced VR/AR as realistic applications at the MEC, where a joint caching, computing, and communication approach is utilized. Furthermore, the authors in [41] introduced a joint offloading and caching approach that enables task uploading and execution at the edge of the MEC architecture. More 3C approaches are examined in [42–45].

1.4.6 Caching for Cars

Content caching at routers, Macro Base Stations (BSs), and RSUs has been extensively studied in [47–49, 64]. However, few authors have studied in-car content caching. The authors in [50] introduced an auto-control system that studies the features of a vehicle's occupants by listening to conversations between passengers. This helps the vehicle to understand the atmosphere or ambiance inside the vehicle and to trace the relationship between passengers. Through this, the vehicle can identify and deliver appropriate infotainment contents to the vehicle's occupants. Furthermore, the authors in [52] introduced a cloud-based vehicular ad-hoc network. In their proposal, vehicles and RSUs cache contents and deliver cached contents on demand. However, using a cloud-based controller can increase the delay required for retrieving contents due to the caching control process. Furthermore, the authors in [53] introduced a new caching approach, where contents are cached at the edge servers (BSs) and in autonomous vehicles. In the proposed caching approach, the edge server selects and caches contents in the vehicles that are well placed, in terms of power or position, to share cached contents with other vehicles. However, in a practical vehicular environment, vehicles and edge servers belong to different owners, and there is no incentive mechanism for vehicle owners to authorize the edge server's operator to cache contents in their vehicles and get involved in content distribution. Finally, in [51], the authors proposed a method for caching in an autonomous car. In their proposal, autonomous vehicles have cache storage for caching the data collected by their sensors, including metadata related to driving decisions. Therefore, from the cache storage, it is possible to generate a driving decision based on similar previously cached driving decisions.

To this end, the novelties of this dissertation over the above related works include: (i) Based on distance measurements and the availability of resources, we propose collaboration among MEC servers in which MEC servers are organized into clusters or collaboration spaces using the OKM-CS algorithm; this approach is new and has not been applied to MEC before. (ii) The works in [37–41] studied joint 2C and 3C, while in this dissertation we propose a joint 4C framework for collaborative MEC that considers the computation resources and energy of the edge device, the computation deadline, the size of the input data, and the MEC 3C resource (communication, computing, and caching) constraints. (iii) To solve our optimization problem for the joint 4C framework for collaborative MEC, we utilize the BSUM approach, which has not yet been applied in MEC scenarios for joint 4C. BSUM is a powerful new distributed algorithm for big-data optimization [15]; here, it is used to decompose our optimization problem into subproblems, where each subproblem can be handled separately. (iv) As an application of 4C, we propose a novel caching approach for the self-driving car that uses passengers' features and the availability of 3C resources in the caching decision, which is new compared with other existing caching approaches in vehicles and wireless networks discussed in [45–49, 52, 53, 64, 95].

1.5 Thesis Outline

The rest of this dissertation is structured as follows: In Chapter 2, we discuss the proposed approach for forming collaboration spaces of MEC servers for handling big data from edge devices. In Chapter 3, we present in detail our joint 4C framework for collaborative MEC, while Chapter 4 discusses an application of 4C for deep learning based caching in self-driving cars. Finally, we end this dissertation with conclusions and future directions in Chapter 5.


Chapter 2

Collaborative Multi-access Edge Computing

2.1 Overview

Multi-access Edge Computing (MEC) was introduced to provide cloud computing capabilities at the edge of the network by deploying MEC servers near the edge devices, where the data is generated and utilized. In this dissertation, we consider that MEC servers are implemented in a wireless network, specifically at the Base Stations (BSs), for handling delay-sensitive and mission-critical applications and performing edge analytics. However, the number of edge devices and applications keeps increasing, where not only people but also things/machines generate data. Therefore, it is challenging for an MEC server to handle such data when each MEC server operates without collaborating with other nearby MEC servers. To overcome this challenge, we need collaboration spaces or clusters of MEC servers that can support low-latency services and minimize backhaul bandwidth consumption. To achieve this, we need to address the following question: Based on distance measurements and available resources, how can MEC servers form a collaboration space?

In this chapter, to answer the above question, we propose an MEC server-based clustering approach called collaboration space formation. In our proposal, based on distance measurements and available resources, MEC servers of the same cluster or collaboration space work jointly by sharing information, tasks, and data. The simulation results demonstrate that our proposal outperforms the elbow method in choosing the number of collaboration spaces, reducing the communication delay among MEC servers of the same collaboration space.


2.2 Background and Contributions

Compared with cloud computing, MEC servers have limited resources. This results in a continuing increase of data traffic between edge devices and data centers, which incurs high end-to-end delay and high backhaul bandwidth consumption. Therefore, to minimize end-to-end delay and backhaul bandwidth consumption, we need collaboration among MEC servers at the edge.

In this research topic, we propose collaboration spaces in big data multi-access edge computing. We aim to specify the method for forming collaboration spaces that allow MEC servers to collaborate with each other by sharing resource utilization information, tasks, and data. This helps in handling big data demands stemming from edge devices, significantly reducing data traffic between edge devices and data centers, and reducing end-to-end delay. The key contributions in this chapter are recapitulated as follows:

• To satisfy edge devices' demands for computing and caching resources, and to efficiently execute computational tasks and perform data caching at the edge, we propose a big data MEC server structure and an MEC server-based clustering approach called collaboration space. In our proposal, MEC servers of the same cluster/collaboration space support each other by working jointly. The objective of this collaboration among MEC servers is to minimize backhaul bandwidth utilization and end-to-end delay while maximizing edge resource utilization (at MEC servers).

• To reduce communication delay and enable collaboration between MEC servers, we propose the Overlapping k-Means Method (OKM) for Collaboration Space (OKM-CS). OKM-CS applies the original OKM algorithm for unsupervised machine learning [27] to the MEC architecture. Based on distance measurements and available resources, OKM-CS allows each MEC server to be in one or more clusters/collaboration spaces.

2.3 System Model

As shown in Fig. 2.1, we use an MEC network with M MEC servers. We use M to denote the set of MEC servers, where each MEC server m ∈ M is deployed at a BS. In this dissertation, unless stated otherwise, we utilize the terms “MEC server” and “BS” interchangeably.

Figure 2.1: System model for collaborative MEC (MEC servers deployed at BSs, interconnected over fiber/X2 links and connected to the data center, with physical and virtual resources in a collaboration space and edge devices on wireless channels).

In our system model, each MEC server m ∈ M works jointly with other MEC servers by sharing tasks, data, and resource utilization information. Therefore, BSs are grouped into clusters (i.e., collaboration spaces). In this dissertation, unless stated otherwise, the terms “cluster” and “collaboration space” have the same meaning. Moreover, to reduce communication delay, our grouping of BSs into collaboration spaces depends on proximity measurements. In other words, BSs deployed in the same area or very close areas are grouped into the same collaboration space. Furthermore, rather than focusing only on geographical space partitioning, our collaboration space formation also focuses on geographic space coverage. As an illustrative example, let us assume that some MEC servers located in one hotspot area want to work jointly with other MEC servers located in another hotspot. To support such collaboration, we propose to use an overlapping clustering approach that permits one MEC server/BS to be in one or more clusters for sharing resources (communication, computing, and storage resources) based on distance or proximity measurements and available resources.


In our system model, we consider that each MEC server m ∈ M is equipped with both computational and caching resources, where the resources are divisible so that they can be allocated to edge devices on demand. We use P_m to denote the computational capacity and C_m to denote the cache capacity of MEC server m. Furthermore, to utilize the MEC server's resources effectively and allocate resources to multiple edge devices, we assume that the MEC servers' resources are sliced. In each collaboration space, based on available resources, MEC servers can share tasks, data, and resource utilization information. Furthermore, within a collaboration space, we assume that the MEC servers belong to the same operator, i.e., Mobile Network Operator (MNO). In addition, we consider that the MNO's network is equipped with a total cache storage capacity denoted by $C$, where $C = \sum_{m \in \mathcal{M}} C_m$, and a total computation capacity denoted by $P$, where $P = \sum_{m \in \mathcal{M}} P_m$.

For sharing resources within the collaboration space, each MEC server m is equipped with a Resource Allocation Table (RAT) that helps the MEC server keep track of available resources such as CPU, RAM, and cache storage utilization. Therefore, to ease communication among MEC servers for sharing tasks and corresponding data within the collaboration space, the MEC servers share RAT updates, i.e., resource utilization information. However, when the resources in the collaboration space are not enough to satisfy the edge devices' demands, MEC server m forwards the demands to the Data Center (DC) via a wired backhaul link.
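A minimal sketch of how an MEC server might consult its RAT to pick a collaborator in the same collaboration space, or fall back to the DC when no server has enough spare resources. The table layout and the spare-CPU tie-breaking heuristic are illustrative assumptions, not the dissertation's actual allocation scheme.

```python
def pick_server(rat, cpu_needed, cache_needed):
    """Pick a collaborating MEC server from the Resource Allocation
    Table (RAT) with enough spare CPU and cache; fall back to the
    remote data center when no server in the space qualifies.

    rat: {server_id: {"cpu_free": GHz, "cache_free": MB}}
    """
    candidates = [(sid, r) for sid, r in rat.items()
                  if r["cpu_free"] >= cpu_needed
                  and r["cache_free"] >= cache_needed]
    if not candidates:
        return "DC"  # forward the demand over the wired backhaul link
    # Prefer the server with the most spare CPU (simple heuristic).
    return max(candidates, key=lambda c: c[1]["cpu_free"])[0]

# Hypothetical RAT shared within one collaboration space.
rat = {"mec1": {"cpu_free": 1.2, "cache_free": 300},
       "mec2": {"cpu_free": 3.5, "cache_free": 100}}
print(pick_server(rat, cpu_needed=1.0, cache_needed=200))  # → mec1
print(pick_server(rat, cpu_needed=4.0, cache_needed=200))  # → DC
```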

In our system model, we use K to denote the set of K edge devices, where each edge device k ∈ K has access to the cellular network via its nearest BS, defined as its “home BS” based on wireless signal strength. Furthermore, we use K_m to denote the set of edge devices connected to BS m ∈ M, where K_m ⊂ K. In addition, for computation and caching, we consider that the edge devices are characterized by limited resources. Therefore, for demands from edge devices that reach the collaboration space, each MEC server m can provide computation and storage resources to the edge devices rather than sending the demands to the DC. As an illustrative example, consider the use of drones in sports activities, where a drone covers sports scenes and sends live-stream videos to a nearby MEC server m, which can perform live-stream processing, caching, and distribution. Furthermore, depending on both demands and network conditions, the cached data can be retrieved as is or processed (e.g., video transcoding), where the MEC server returns the processed data to the edge devices.


Figure 2.2: Big Data MEC (edge devices offload data to and download from the Big Data MEC, which virtualizes 3C resources and hosts a big data distributed file system and big data analytics platforms — raw data preprocessing, data cleansing, and feature extraction; statistical modeling, data mining, and artificial intelligence; and big data computation output — serving delay-sensitive, mission-critical, and location-aware applications).

We consider that each edge device k ∈ K is equipped with an application that wants to utilize computation and/or caching resources (e.g., mixed reality, crowdsensing, online gaming). However, if the edge device does not have enough resources, it can offload its task to an MEC server. Here, we use a binary task offloading approach, where a task from an edge device is a single entity that can either be computed at the edge device, i.e., locally, or offloaded to the nearest MEC server attached to its home BS. For each k ∈ K, we use T_k to denote the task from edge device k, where T_k = (s(d_k), τ_k, z_k). In T_k, we use s(d_k) to denote the size of the input data d_k (the input of the computation) from edge device k, where d_k is measured in bits. Furthermore, we use τ_k to denote the computation deadline of task T_k, while z_k denotes the computation workload, measured in CPU cycles per bit. Moreover, we consider that the demands from different edge devices are independent.
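Under these definitions, a binary offloading decision can be sketched by comparing the local completion time s(d_k) · z_k / f_local against the offloading time s(d_k)/r + s(d_k) · z_k / f_MEC (uplink transmission plus remote execution). The CPU frequencies and uplink rate below are illustrative figures, and device energy is omitted for brevity.

```python
def offload_decision(s_bits, z_cycles_per_bit, tau_s,
                     f_local_hz, f_mec_hz, uplink_bps):
    """Binary offloading for task T_k = (s(d_k), tau_k, z_k): compute
    locally when the local finish time is smaller, otherwise offload
    to the home MEC server; also report whether tau_k is met.
    """
    t_local = s_bits * z_cycles_per_bit / f_local_hz
    t_offload = (s_bits / uplink_bps
                 + s_bits * z_cycles_per_bit / f_mec_hz)
    choice = "local" if t_local <= t_offload else "offload"
    return choice, min(t_local, t_offload) <= tau_s

# 4 Mb task, 100 cycles/bit, 1 GHz device CPU, 10 GHz MEC, 20 Mbps uplink:
# local takes 0.4 s, offloading takes 0.2 s + 0.04 s = 0.24 s.
print(offload_decision(4e6, 100, tau_s=0.5,
                       f_local_hz=1e9, f_mec_hz=10e9,
                       uplink_bps=20e6))  # → ('offload', True)
```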

In this dissertation, to handle big data demands stemming from edge devices, as illustrated in Fig. 2.2, we propose a big data MEC architecture that is able to support the big data cloud requirements described in [55], including but not limited to: (i) easy implementation of virtual machines, file systems, big data platforms, and applications such as Spark, Storm, Hadoop, and

Splunk; (ii) efficient management of 3C resources (communication, computation, and caching) in both physical and virtual environments; (iii) scalability and elasticity in 3C resource allocation; (iv) easy deployment and utilization of big data platforms and applications with fast access to data and 3C resources; and (v) handling of multi-dimensional data processing, in which data of various characteristics and forms arrive at the MEC server for computation and caching.

2.4 Collaboration Space in Multi-access Edge Computing

2.4.1 Collaboration Space Formation

In this subsection, we describe in detail how the MEC servers form a collaboration space based on distance measurements and available resources. Recall that the main goals of a collaboration space are to handle, at the edge, the big data generated by edge devices and to minimize the extensive data and task traffic between edge devices and data centers. This reduces end-to-end delay, i.e., it improves the Quality of Experience of edge devices in terms of computation and content retrieval.

To form collaboration spaces, for a given set M of M MEC servers, we aim to group the servers into r collaboration spaces or clusters based on proximity measurements and available resources. Since our approach for forming collaboration spaces puts emphasis on geographic space coverage, we assume that the MNO chooses the value of r based on the size of its network topology. In other words, the size of the network affects the choice of the value of r. For this reason, other methods for determining the value of r, such as the elbow method described in [76], are not applicable in our scenario.

To form collaboration spaces or clusters of MEC servers, we introduce the OKM-CS algorithm, which is an implementation of the original Overlapping k-Means Method (OKM) algorithm [27] in the MEC scenario. We selected the OKM algorithm over other overlapping clustering algorithms, such as Multi-Cluster Overlapping k-Means Extension (MCOKE), Weighted OKM (WOKM), and Overlapping Partitioning Cluster (OPC) [54], because of its simplicity of implementation. In this subsection, we provide the mathematical formulation of OKM-CS, while the pseudocode of OKM-CS is provided and described in the next subsection. OKM-CS groups the BSs into r clusters by minimizing the proximity measurement described in the following objective function:

\[
\text{minimize} \quad I\big(\{\mathcal{M}_i\}_{i=1}^{r}\big) = \sum_{i=1}^{r} \sum_{m \in \mathcal{M}_i} \big\lVert m - \Phi(m) \big\rVert^{2}, \tag{2.1}
\]

where $\mathcal{M}_i \subset \mathcal{M}$ is the i-th collaboration space. In addition, as described in [27], Φ(m) is the average of the centroids $m_{c_i}$ of the collaboration spaces to which BS m belongs, and Φ(m) is mathematically described as follows:

\[
\Phi(m) = \frac{\sum_{m_{c_i} \in \mathcal{A}_i^{m}} m_{c_i}}{\big\lvert \mathcal{A}_i^{m} \big\rvert}, \tag{2.2}
\]

where $\mathcal{A}_i^{m}$ captures the multiple assignments of each BS m, i.e., $\mathcal{A}_i^{m} = \{m_{c_i} \mid m \in \mathcal{M}_i\}$ denotes the set of all centroids $m_{c_i}$ such that $m \in \mathcal{M}_i$. In the context of our proposal, i.e., in OKM-CS, we define a centroid BS as the BS that is the center of a collaboration space, where the centroid BS has to be unique in each collaboration space. Moreover, to ensure that the MEC servers of different collaboration spaces may collaborate, we form overlapping collaboration spaces, where each BS m can be a member of one or multiple collaboration spaces and $\bigcup_{i=1}^{r} \mathcal{M}_i = \mathcal{M}$ represents the total coverage of the MNO's network. Here, BSs are associated with the centroids using geographical locations. In other words, the overlapping collaboration spaces enable each MEC server to be a member of one or more collaboration spaces so as to collaborate and share information on 3C resource utilization.

In the OKM algorithm [27], the number of collaboration spaces, i.e., the number of clusters r, is chosen randomly. However, in this dissertation, before forming the collaboration spaces, we assume that the MNO knows its network topology and that the topology does not change frequently. Therefore, the MNO chooses the number of clusters r based on its network topology. When the topology of the MNO changes, OKM-CS needs to be executed again to update the formation of the collaboration spaces and their associated centroids. Furthermore, in this dissertation, we first perform clustering to form the collaboration spaces. Then, based on the collaboration

spaces, we can formulate the optimization problem for 4C described in Chapter 3. If we attempt to

Page 33: Disclaimer - khu.ac.krnetworking.khu.ac.kr/layouts/net/publications/data/phd... · 2020-02-10 · Anselme Ndikumana Department of Computer Science & Engineering Graduate School Kyung

CHAPTER 2. COLLABORATIVE MULTI-ACCESS EDGE COMPUTING 20

Algorithm 1: Overlapping k-Means Method for Collaboration Space (OKM-CS)
1: Input: $\mathcal{M}$: a set of BSs; $\varepsilon > 0$; $t_m$: maximum number of iterations;
2: Output: $\{\mathcal{M}_i^{(t+1)}\}_{i=1}^{r}$: final collaboration spaces of BSs;
3: Choose $r$ as the initial number of collaboration spaces with centroids $\{m_{c_i}^{(0)}\}_{i=1}^{r}$;
4: For each BS $m$, compute the assignment $\mathcal{A}_m^{(0)}$ by assigning BS $m$ to centroids $\{m_{c_i}^{(0)}\}_{i=1}^{r}$, and derive the initial coverage $\{\mathcal{M}_i^{(0)}\}_{i=1}^{r}$ such that $\mathcal{M}_i^{(0)} = \{m \mid m_{c_i}^{(0)} \in \mathcal{A}_m^{(0)}\}$;
5: Initialize $t = 0$;
6: For each collaboration space $\mathcal{M}_i^{(t)}$, compute the new centroid $m_{c_i}^{(t+1)}$ by grouping nearby $\mathcal{M}_i^{(t)}$;
7: For each BS $m$ and assignment $\mathcal{A}_m^{(t)}$, compute the new assignment $\mathcal{A}_m^{(t+1)}$ by assigning BS $m$ to centroids $\{m_{c_i}^{(t+1)}\}_{i=1}^{r}$ and derive the new coverage $\{\mathcal{M}_i^{(t+1)}\}_{i=1}^{r}$;
8: While (2.1) has not converged, i.e., $I(\{\mathcal{M}_i^{(t)}\}_{i=1}^{r}) - I(\{\mathcal{M}_i^{(t+1)}\}_{i=1}^{r}) > \varepsilon$ and $t_m > t$, set $t = t + 1$ and restart from Step 6; otherwise, stop and consider $\{\mathcal{M}_i^{(t+1)}\}_{i=1}^{r}$ as the final collaboration spaces.

solve the clustering and optimization problems jointly, the system will no longer be able to meet the computation deadlines because of the high computation time of clustering.

2.4.2 Overlapping k-Means Method for Collaboration Space (OKM-CS)

Our OKM-CS is depicted in Algorithm 1. First, for a given set of MEC servers and their associated coordinates, the OKM-CS begins with an initial $r$ for computing the collaboration spaces, i.e., clusters, and the corresponding centroids $\{m_{c_i}^{(0)}\}_{i=1}^{r}$. From the initial collaboration spaces and centroids, the algorithm finds the new coverage $\{\mathcal{M}_i^{(0)}\}_{i=1}^{r}$. Through iterations, the OKM-CS computes new assignments of MEC servers and corresponding centroids $\{m_{c_i}^{(t+1)}\}_{i=1}^{r}$, which produce a new coverage $\{\mathcal{M}_i^{(t+1)}\}_{i=1}^{r}$. The iteration goes on until the convergence criterion is reached, i.e., when $I(\{\mathcal{M}_i^{(t)}\}_{i=1}^{r}) - I(\{\mathcal{M}_i^{(t+1)}\}_{i=1}^{r}) < \varepsilon$, where $\varepsilon$ is a small positive number. Since we focus on the cooperation of the MEC servers of the same collaboration space, for brevity, we leave out the subscript $i$ on $\mathcal{M}_i$ and continue to study the cooperation within one collaboration space.
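The assignment and update steps of Algorithm 1 can be sketched in Python. The overlap rule below, which joins a BS to every centroid within a fixed factor of its nearest-centroid distance, is a simplification assumed for illustration and differs from the exact OKM assignment; all names are illustrative.

```python
import numpy as np

def okm_cs(bs_coords, r, eps=1e-4, t_max=100, overlap_factor=1.2, seed=0):
    """Sketch of OKM-CS: overlapping k-means over BS coordinates.
    Each BS joins its nearest centroid's space, plus any centroid closer
    than overlap_factor times that nearest distance (simplified rule)."""
    rng = np.random.default_rng(seed)
    centroids = bs_coords[rng.choice(len(bs_coords), r, replace=False)].copy()
    prev_obj = np.inf
    for _ in range(t_max):
        dists = np.linalg.norm(bs_coords[:, None, :] - centroids[None, :, :], axis=2)
        nearest = dists.min(axis=1)
        member = dists <= overlap_factor * nearest[:, None]  # BS x space membership
        for i in range(r):                                   # centroid update step
            if member[:, i].any():
                centroids[i] = bs_coords[member[:, i]].mean(axis=0)
        obj = (dists[member] ** 2).sum()                     # clustering objective I(.)
        if prev_obj - obj < eps:                             # convergence criterion
            break
        prev_obj = obj
    return centroids, [np.flatnonzero(member[:, i]) for i in range(r)]
```

Because every BS is at least assigned to its nearest centroid, the union of the returned spaces always covers the whole set of BSs, mirroring $\bigcup_{i=1}^{r}\mathcal{M}_i = \mathcal{M}$.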

To allocate the 3C resources of MEC servers to edge devices, we consider that each edge device $k$ that needs resources submits its task $T_k$ and the corresponding data $d_k$ as a demand to the nearest MEC server $m$ attached to its BS $m$. On receiving the demand $T_k$, the MEC servers match the demands


Table 2.1: The choice of r for making the collaboration spaces.

    Number of BSs    r = 100 [76]    r = 500    r = 1000    r = 2000
    Maximum          1299            374        200         143
    Minimum          12              1          1           1
    Average          128             25         13          6

with a resource for satisfying each edge device $k$'s requirement. To help edge devices prepare their demands and requirements, we consider that the MNO announces its total available resources. However, in sharing this information, the MNO does not share the information of individual edge devices. Furthermore, for each edge device $k$ and MEC server $m$, $v_{km}(c_{d_k}, p_{km}, R_k^m)$ denotes the resource allocation function. In $v_{km}(c_{d_k}, p_{km}, R_k^m)$, $c_{d_k}$ represents the caching resource allocation for data of size $s(d_k)$ from edge device $k$, where $c_{d_k} = s(d_k)$. On the other hand, $p_{km}$ represents the computational resource allocation, while $R_k^m$ defines the communication resource allocated to each edge device $k$ at MEC server $m$.

In a collaboration space, the MNO assigns 3C resources to edge devices based on weighted proportional allocation [56]. We choose weighted proportional allocation over other approaches because it is practical and has already been utilized, and excels, in realistic cellular networks such as 3G, 4G, and 5G [58, 108]. In weighted proportional allocation, based on the demands and the available resources, each edge device $k$ gets a fraction of the available resources at the MEC server $m$. Furthermore, in this dissertation, if both $\tau_k = 0$ and $z_k = 0$, the MEC server considers that the edge device demands communication resources to offload data $d_k$ and caching resources to cache the offloaded data. In such a scenario, the MEC server caches the offloaded data $d_k$ and waits for later request(s) of the cached data $d_k$. Furthermore, cached $d_k$ can be served on demand as is or after being processed. On the other hand, if $s(d_k) \neq 0$, $\tau_k \neq 0$, and $z_k \neq 0$, the MEC server computes the task using the offloaded data $d_k$ as input, caches the output data of $d_k$ after computation, and sends the computation output to edge device $k$ over the wireless channel.
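As a minimal numeric sketch of weighted proportional allocation (function and variable names are illustrative, not from the dissertation), each edge device receives a fraction of an MEC server's resource proportional to its demand:

```python
def proportional_share(demands, capacity):
    """Weighted proportional allocation: device k receives
    capacity * demand_k / sum(demands), so the shares together
    never exceed the server's capacity."""
    total = sum(demands.values())
    return {k: capacity * d / total for k, d in demands.items()}

# Example: two devices demanding 2 and 6 units of a capacity-4 resource.
shares = proportional_share({"dev_a": 2, "dev_b": 6}, capacity=4)
# dev_a gets 1.0, dev_b gets 3.0; together they exhaust the capacity.
```

The same rule applies to each of the 3C resources separately, with the demand being the data size, CPU cycles, or requested rate.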


Figure 2.3: Collaboration space formation (r = 100) using elbow method [76].

Figure 2.4: Collaboration space formation (r = 500) using OKM-CS.

2.5 Simulation Results and Analysis

To make collaboration spaces of MEC servers, we use Python [73] as the programming language, pandas [74] for data analysis, and the Sitefinder dataset [75] for BSs. The Sitefinder dataset is a database of BSs published by Edinburgh DataShare and available to the public in [75]. In this dataset, one MNO is chosen randomly. For the given set of 12777 BSs of the chosen MNO, by using the OKM-CS algorithm, we classify those BSs into clusters. For different values of r, we summarize the number of BSs in each collaboration space in Table 2.1. First, as illustrated in Fig. 2.3, we apply the elbow method [76] to find an optimal number of collaboration spaces r, which yields r = 100.
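The elbow method can be reproduced with a plain (non-overlapping) k-means inertia curve; the bend of inertia versus r suggests the number of collaboration spaces. The sketch below is numpy-only and illustrative (it is not the OKM-CS itself):

```python
import numpy as np

def kmeans_inertia(coords, r, iters=50, seed=0):
    """Run plain k-means and return the inertia, i.e., the sum of
    squared distances from each point to its nearest centroid."""
    rng = np.random.default_rng(seed)
    centroids = coords[rng.choice(len(coords), r, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(coords[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for i in range(r):
            if (labels == i).any():
                centroids[i] = coords[labels == i].mean(axis=0)
    d = np.linalg.norm(coords[:, None] - centroids[None], axis=2)
    return float((d.min(axis=1) ** 2).sum())

# Plotting kmeans_inertia(bs_coords, r) for a range of r values and
# locating the bend of the curve gives the elbow choice of r.
```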


Figure 2.5: Collaboration space formation (r = 1000) using OKM-CS.

Figure 2.6: Collaboration space formation (r = 2000) using OKM-CS.

Furthermore, for r = 100, forming the collaboration spaces takes 343.76 seconds. In addition, for r = 100, we observed that many collaboration spaces contain a large number of BSs each. This creates communication overhead, in terms of delay, for the MEC servers of the same collaboration space. From this, we can confirm that the elbow method [76] is not suitable for forming collaboration spaces.

To overcome the above-highlighted challenge, as shown in Fig. 2.4, we set r = 500. However, this still leaves many BSs in each collaboration space. Furthermore, as shown in Fig. 2.5, we increase the value of r up to r = 1000. With this setting, each collaboration space has 13 BSs on average, which is a suitable value of


r for the network topology used to evaluate our proposal. In other words, in the dataset, for the given set of 12777 BSs of the MNO, we find r = 1000 to be a reasonable choice because it gives an acceptable number of BSs in each collaboration space. Here, for an MNO, the chosen value of r should depend on the network size; dissimilar networks need different values of r. However, for the used network topology, when the value of r becomes larger than 1000, the system has many small collaboration spaces with only one BS, which is a drawback; an illustrative example is shown in Fig. 2.6 for r = 2000.

Among the 1000 collaboration spaces formed using the OKM-CS algorithm, we randomly choose one collaboration space of 13 BSs. Then, we attach an MEC server to each BS. Moreover, in the collaboration space, we use K = 100 as the initial number of edge devices connected to each BS m. Then, we exponentially increase the number of edge devices up to K = 3200. To satisfy the edge devices' demands, at each time slot, we use binary offloading, where each edge device offloads one task.

2.6 Summary

In this chapter, we have presented our overlapping k-means method for creating collaboration spaces of MEC servers. The proposed overlapping k-means method for collaboration space (OKM-CS) is based on the standard OKM algorithm. The proposed collaboration spaces have the advantage of reducing data exchange between edge devices and remote clouds. With collaboration spaces, the MEC servers of the same collaboration space can cooperate by sharing tasks, data, and 3C resource utilization information. This helps in dealing with big data from edge devices at the edge, providing computer system resources in close proximity to edge devices, and minimizing the end-to-end delay. For choosing the number of collaboration spaces, we have compared our approach, which is based on the network topology, with the elbow method; the simulation results show that our proposal outperforms the elbow method in choosing a reasonable value of r, which reduces the communication delay between MEC servers of the same collaboration space.


Chapter 3

Joint 4C in Collaborative Multi-access Edge Computing

3.1 Overview

After proposing the new approach for forming collaboration spaces of big-data MEC in Chapter 2, in this chapter we address the following question: how can the communication, computation, and caching resources of MEC servers in a collaboration space be used effectively to minimize communication delay and backhaul bandwidth consumption?

To answer the above question, we propose a joint communication, computation, caching, and control (4C) scheme in collaborative big-data MEC that aims to satisfy edge devices' demands for the communication, computation, and caching resources (3C) of MEC servers. In other words, for coordinating the utilization of the joint 3C resources, we propose a control model, based on a distributed optimization problem, to realize 4C. We formulate an optimization problem for joint 4C in collaborative MEC that jointly minimizes both backhaul bandwidth consumption and network latency. Since the formulated problem is non-convex and intractable, we propose a proximal upper-bound problem of the formulated problem. Then, we apply the block successive upper-bound minimization (BSUM) method to solve it. We choose the BSUM method over other techniques because it allows us to split the problem into subproblems, where each subproblem can be separately addressed. The numerical results show that our joint 4C approach performs well in meeting computation deadlines and minimizing both network latency and backhaul bandwidth consumption.


3.2 Background and Contributions

3.2.1 Background

Multi-access Edge Computing (MEC) is considered among the key technologies that can enable fifth-generation (5G) cellular networks to fulfill their requirements in terms of lower latency and efficient bandwidth utilization. In addition, MEC and 5G share the same specifications, such as services based on interactions among various network functions, support for software-defined networking, and network virtualization [12]. Moreover, the MEC servers should be equipped with big-data platforms and applications that can handle big data from edge devices, where the edge devices obtain 3C resources from nearby MEC servers. Furthermore, for efficient utilization of 3C resources, and to minimize the communication delay and the data traffic between edge devices and data centers, i.e., to reduce backhaul bandwidth consumption, the MEC servers deployed in the same or very close areas should collaborate to share resources [46].

Caching offloaded data at the edge for later use, as in existing related works [36, 64], can help in reducing the end-to-end delay by minimizing data transfers from the edge devices to the remote clouds. However, considering caching without the computation of cached data can result in lower cache hits. Therefore, to overcome this challenge, on request, cached data can be computed into various formats with different qualities using the computing resources of MEC servers. Still, offloading data from edge devices to MEC servers for computation and caching requires communication resources. If the cost of communication is high, the edge devices will end up not offloading data to MEC servers. Therefore, to overcome these challenges, a new approach for 3C is needed. In addition, a control model is needed for coordinating 3C resource utilization.

3.2.2 Contribution

We consider massive data from edge devices, where offloading massive data to MEC servers for processing, analysis, and caching in close proximity to the edge devices requires communication resources. Therefore, we propose a joint communication, computation, caching, and control (4C) framework that is suitable for handling the massive data of both delay-sensitive and mission-critical applications at the edge, which require lower latency and efficient bandwidth utilization. The main

Page 40: Disclaimer - khu.ac.krnetworking.khu.ac.kr/layouts/net/publications/data/phd... · 2020-02-10 · Anselme Ndikumana Department of Computer Science & Engineering Graduate School Kyung

CHAPTER 3. JOINT 4C IN COLLABORATIVE MULTI-ACCESS EDGE COMPUTING 27

contribution of this chapter is summarized as follows:

• First, we propose a new approach that joins 4C in collaborative MEC. Here, the tasks and the corresponding data are offloaded, using communication resources, to a nearby MEC server rather than being sent to the remote data centers, where the MEC server performs the computation and caching functions. This allows communication resources to be used efficiently and minimizes backhaul bandwidth consumption, which has a positive impact on reducing the communication delay and the data traffic between edge devices and data centers.

• Second, we formulate the proposed approach as an optimization problem that minimizes both network latency and backhaul bandwidth consumption, subject to the local computation resources of the edge devices, the computation deadlines, and the MEC servers' 3C resource constraints. However, our optimization problem is non-convex and intractable. Therefore, to handle it, we propose a proximal upper-bound problem of the formulated problem, which is convex. Then, we use the Block Successive Upper-bound Minimization (BSUM) method to decompose the formulated problem into subproblems, where each subproblem can be separately addressed [15].

3.3 System Model for Joint 4C

In this section, we present a joint 4C framework for collaborative MEC. At an MEC server, we consider that the 3C (communication, computation, caching) resources are virtualized and allocated to multiple edge devices on demand. Furthermore, in a collaboration space, the demands for 3C resources that cannot be served by one MEC server can be served by any other MEC server that has enough resources. Therefore, we present a distributed control model for coordinating 3C resource utilization in the collaboration space.

3.3.1 Communication Model

In our communication model, illustrated in Fig. 3.1 and explained below, we consider three scenarios for handling the task and corresponding data offloaded from an edge device to the MEC server, each of which incurs a communication cost (bandwidth cost).



Figure 3.1: Illustration of collaboration space with three communication scenarios (a), (b), and (c) (Section 3.3.1).

Scenario (a): We consider that edge device $k \in \mathcal{K}$ can get 3C resources from the nearest MEC server attached to its home BS $m \in \mathcal{M}$ using a wireless channel. To model this communication, we define a computation offloading decision variable $x_k^m \in \{0, 1\}$ that specifies whether or not edge device $k$ offloads $T_k$ to its nearest MEC server $m$ via a wireless channel:

\begin{equation}
x_k^m =
\begin{cases}
1, & \text{if task } T_k \text{ from edge device } k \text{ is offloaded to BS } m,\\
0, & \text{otherwise.}
\end{cases}
\tag{3.1}
\end{equation}

In addition, for each edge device $k$, we calculate the spectrum efficiency [37] as follows:

\begin{equation}
\gamma_k^m = \log_2\left(1 + \frac{\rho_k |G_k^m|^2}{\sigma_k^2}\right), \quad \forall k \in \mathcal{K},\ m \in \mathcal{M}.
\tag{3.2}
\end{equation}

In (3.2), $\rho_k$ denotes the transmission power of edge device $k$, $|G_k^m|^2$ denotes the channel gain between BS $m$ and edge device $k$, and $\sigma_k^2$ denotes the power of the Gaussian noise at edge device $k$.


From (3.2), we can calculate the data rate for each edge device $k$ as follows:

\begin{equation}
R_k^m = x_k^m a_k^m B_m \gamma_k^m, \quad \forall k \in \mathcal{K},\ m \in \mathcal{M}.
\tag{3.3}
\end{equation}

At BS $m$, we consider that each edge device $k$ is assigned a fraction $a_k^m$ ($0 \leq a_k^m \leq 1$) of the bandwidth $B_m$. In addition, to ensure that there is no interference among the edge devices, we consider that the spectrum at BS $m \in \mathcal{M}$ is orthogonal, where a demand for offloading is considered at BS $m \in \mathcal{M}$ when there is spectrum resource to serve the edge device's demand.

By considering the data rate calculated in (3.3), the transmission delay to offload a task $T_k$ from edge device $k$ to the nearby BS $m$ is

\begin{equation}
\tau_k^{k \to m} = \frac{x_k^m s(d_k)}{R_k^m}, \quad \forall k \in \mathcal{K}_m,
\tag{3.4}
\end{equation}

where $\mathcal{K}_m$ denotes the set of edge devices that are connected to BS $m$.
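Equations (3.2)–(3.4) chain together as spectrum efficiency → data rate → transmission delay; a small numeric sketch (all parameter values are illustrative, not from the dissertation):

```python
import math

def data_rate(bandwidth_hz, fraction, tx_power_w, channel_gain, noise_w):
    """Eqs. (3.2)-(3.3): a fraction a_k^m of the BS bandwidth B_m times
    the spectrum efficiency log2(1 + rho |G|^2 / sigma^2)."""
    gamma = math.log2(1.0 + tx_power_w * channel_gain / noise_w)
    return fraction * bandwidth_hz * gamma

def tx_delay(data_size_bits, rate_bps):
    """Eq. (3.4): transmission delay s(d_k) / R_k^m for offloading."""
    return data_size_bits / rate_bps

rate = data_rate(20e6, 0.1, 0.2, 1e-7, 1e-10)  # a 10% share of 20 MHz
delay = tx_delay(1e6, rate)                    # offload 1 Mbit
```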

Scenario (b): In this scenario, if the MEC server attached to BS $m$ does not have enough resources to serve the demand of edge device $k \in \mathcal{K}$, it checks its Resource Allocation Table (RAT) to find another BS $n$ in the collaboration space that can serve the demand of edge device $k \in \mathcal{K}$, i.e., another MEC server $n$ that has enough resources. Then, BS $m$ sends the demand to BS $n$ using the X2 link [60]. In other words, edge devices can obtain 3C resources from various BSs/MEC servers at different communication costs, such as delays and bandwidth consumption.

For task offloading, we use $y_k^{m \to n}$ to define the following decision variable, which specifies whether or not an offloaded task $T_k$ from edge device $k$ is forwarded from BS $m$ to a nearby BS $n$ in the collaboration space:

\begin{equation}
y_k^{m \to n} =
\begin{cases}
1, & \text{if the offloaded task } T_k \text{ from edge device } k \text{ is forwarded from BS } m \text{ to the nearest neighbor BS } n,\\
0, & \text{otherwise.}
\end{cases}
\tag{3.5}
\end{equation}

Furthermore, we use $\tau_k^{m \to n}$ to denote the offloading delay between the two neighboring BSs $m$ and $n$, calculated as follows:

\begin{equation}
\tau_k^{m \to n} = \frac{\sum_{k \in \mathcal{K}_m} y_k^{m \to n} s(d_k)}{\Gamma_m^n}, \quad \forall m, n \in \mathcal{M},
\tag{3.6}
\end{equation}

where $\Gamma_m^n$ denotes the capacity of the X2 link between BS $m$ and BS $n$ of the same collaboration space.

Scenario (c): Here, we consider the scenario in which there are no available resources in the collaboration space to satisfy the edge device's demand. In such a situation, BS $m$ sends the edge device's demand to the remote data center (DC) via the wired backhaul link between BS $m$ and the DC. Therefore, we use $y_k^{m \to DC}$ to denote a decision variable that specifies whether or not the offloaded task $T_k$ from edge device $k$ is forwarded from BS $m$ to the DC, where $y_k^{m \to DC}$ is given by:

\begin{equation}
y_k^{m \to DC} =
\begin{cases}
1, & \text{if the offloaded } T_k \text{ from edge device } k \text{ is forwarded from BS } m \text{ to the DC},\\
0, & \text{otherwise.}
\end{cases}
\tag{3.7}
\end{equation}

In addition, we use $\tau_k^{m \to DC}$ to denote the offloading delay between BS $m$ and the DC for the offloaded $T_k$ from edge device $k$, where $\tau_k^{m \to DC}$ is described mathematically as follows:

\begin{equation}
\tau_k^{m \to DC} = \frac{\sum_{k \in \mathcal{K}_m} y_k^{m \to DC} s(d_k)}{\Omega_m^{DC}}, \quad \forall m \in \mathcal{M},
\tag{3.8}
\end{equation}

where $\Omega_m^{DC}$ denotes the capacity of the link between MEC server $m$ and the remote DC.
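The three scenarios can be read as a simple fallback chain: serve at the home MEC server, else at a neighbor listed in the RAT, else at the remote data center. A hedged sketch (the RAT is modeled as a plain dict of free capacities, which is an assumption for illustration):

```python
def place_demand(required, home_free, rat):
    """Pick the serving node for a demand: scenario (a) home MEC server,
    (b) a neighbor from the Resource Allocation Table via the X2 link,
    or (c) the remote data center via the backhaul link."""
    if home_free >= required:
        return "home"                    # scenario (a): resource hit at BS m
    for bs, free in rat.items():         # scenario (b): search collaboration space
        if free >= required:
            return f"neighbor:{bs}"
    return "DC"                          # scenario (c): resource miss everywhere
```

Each branch corresponds to one of the delays (3.4), (3.6), and (3.8) being added to the task's total latency.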

3.3.2 Computation Model

In our computation model, we use two sub-models. The first sub-model is related to local computation at the edge devices, while the second is related to computation at the MEC servers.

3.3.2.1 Local Computation Model at Edge Device

For computing edge device’s task Tk requires CPU energy Ek and computation resource Pk in

terms of CPU clock frequency. The CPU energy consumption can be mathematically expressed

Page 44: Disclaimer - khu.ac.krnetworking.khu.ac.kr/layouts/net/publications/data/phd... · 2020-02-10 · Anselme Ndikumana Department of Computer Science & Engineering Graduate School Kyung

CHAPTER 3. JOINT 4C IN COLLABORATIVE MULTI-ACCESS EDGE COMPUTING 31

as follows:

Ek = s(dk)νzkP2k , k ∈ K, (3.9)

where ν is CPU constant parameter, which depends on CPU hardware architecture. Furthermore,

to execute task Tk at edge device k ∈ K takes time, where execution time lk can be expressed as

follows:

lk =s(dk)zkPk

. (3.10)

However, when edge device $k$ does not have enough computation and energy resources to execute task $T_k$ and meet the computation deadline $\tau_k$ (i.e., $z_k > P_k$, $l_k > \tau_k$, and/or $E_k > \bar{E}_k$), edge device $k$ can hold its task $T_k$ until the required resources become available for executing it locally. Here, $\bar{E}_k$ denotes the available energy resource at edge device $k \in \mathcal{K}$. Therefore, to incorporate the holding time in the local computation model for executing task $T_k$, we define a status parameter $\alpha_k \in \{0, 1\}$ for edge device $k$, where $\alpha_k$ can be expressed as follows:

\begin{equation}
\alpha_k =
\begin{cases}
0, & \text{if } z_k > P_k, \text{ or } l_k > \tau_k, \text{ or } E_k > \bar{E}_k,\\
1, & \text{otherwise.}
\end{cases}
\tag{3.11}
\end{equation}

Furthermore, at edge device $k$, the total local execution time $\tau_k^{\text{loc}}$ for task $T_k$ is given by:

\begin{equation}
\tau_k^{\text{loc}} =
\begin{cases}
l_k, & \text{if } \alpha_k = 1 \text{ and } x_k^m = 0,\\
l_k + \varphi_k, & \text{if } \alpha_k = 0 \text{ and } x_k^m = 0,\\
0, & \text{if } \alpha_k = 0 \text{ and } x_k^m = 1,
\end{cases}
\tag{3.12}
\end{equation}

where $\varphi_k$ is the average waiting time until the resources become available for executing task $T_k$. If the edge device has enough resources, such as energy, CPU cycles, and memory, i.e., the status parameter $\alpha_k = 1$, edge device $k \in \mathcal{K}$ can compute its task $T_k$


locally. In such a situation, edge device $k$ does not need to offload its task and the corresponding data to the nearest MEC server attached to its home BS. However, the edge device experiences the local computation delay $\tau_k^{\text{loc}}$. In addition, if edge device $k$ cannot hold its task until the resources become available, i.e., $\alpha_k = 0$ for local execution of task $T_k$ ($l_k > \tau_k$, $z_k > P_k$, or $E_k > \bar{E}_k$), edge device $k$ needs to offload its task $T_k$ and the corresponding data to the nearest MEC server $m$ attached to its home BS.
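Equations (3.9)–(3.12) can be combined into a small local-execution check; the constant ν and all numeric values below are illustrative assumptions, and the $z_k > P_k$ test is folded into the deadline check for simplicity:

```python
def local_execution(d_bits, z_cycles_per_bit, p_hz, e_avail_j,
                    tau_deadline_s, nu=1e-27, wait_s=0.5):
    """Local computation model, Eqs. (3.9)-(3.12):
    returns (alpha_k, tau_loc) for the no-offloading case x_k^m = 0."""
    energy = d_bits * nu * z_cycles_per_bit * p_hz ** 2   # Eq. (3.9)
    l_k = d_bits * z_cycles_per_bit / p_hz                # Eq. (3.10)
    alpha = 0 if (l_k > tau_deadline_s or energy > e_avail_j) else 1  # Eq. (3.11)
    tau_loc = l_k if alpha == 1 else l_k + wait_s         # Eq. (3.12)
    return alpha, tau_loc
```

When the returned `alpha` is 0, the device either waits (paying $\varphi_k$) or offloads, matching the decision logic above.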

3.3.2.2 Computation Model at MEC Server

If edge device $k$ does not have enough resources to compute its task $T_k$, it offloads $T_k$ to the nearby MEC server $m \in \mathcal{M}$ attached to its home BS. At the MEC server side, to minimize delay, we consider that the offloaded task $T_k$ has to be computed at the first MEC server encountered that has the required computation resource. Furthermore, we consider that each MEC server $m \in \mathcal{M}$ has an available computational resource $P_m$. For allocating $P_m$, we use a computation decision variable $y_k^{k \to m} \in \{0, 1\}$, where $y_k^{k \to m}$ specifies whether or not the offloaded task $T_k$ from edge device $k$ is computed at MEC server $m$. The decision variable $y_k^{k \to m}$ is given by:

\begin{equation}
y_k^{k \to m} =
\begin{cases}
1, & \text{if BS } m \text{ computes the task } T_k \text{ offloaded by edge device } k,\\
0, & \text{otherwise.}
\end{cases}
\tag{3.13}
\end{equation}

Furthermore, at BS $m$, the computation resource allocation $p_{km}$ can be expressed as follows:

\begin{equation}
p_{km} = \frac{P_m z_k}{\sum_{g \in \mathcal{K}_m} z_g}, \quad \forall k \in \mathcal{K}_m,\ m \in \mathcal{M}.
\tag{3.14}
\end{equation}

In addition, the computation resource allocation has to satisfy the following resource constraint:

\begin{equation}
\sum_{k \in \mathcal{K}_m} x_k^m p_{km} y_k^{k \to m} \leq P_m, \quad \forall m \in \mathcal{M}.
\tag{3.15}
\end{equation}

We consider that computing task $T_k$ at MEC server $m$ takes time. Therefore, we calculate the execution latency $l_{km}$ as follows:

\begin{equation}
l_{km} = \frac{s(d_k) z_k}{p_{km}}.
\tag{3.16}
\end{equation}

Therefore, based on the above equation, the total execution time at MEC server $m$ of the offloaded task $T_k$ from edge device $k$ can be mathematically expressed as follows:

\begin{equation}
\tau_{km}^{e} = \tau_k^{k \to m} + l_{km}, \quad \forall k \in \mathcal{K}_m,\ m \in \mathcal{M}.
\tag{3.17}
\end{equation}

When $z_k > p_{km}$ or $\tau_{km}^{e} > \tau_k$, we consider that MEC server $m$ does not have the required resource to execute the offloaded task $T_k$ from edge device $k$ and satisfy the computation deadline $\tau_k$. Then, in the collaboration space, MEC server $m$ checks its RAT to identify another nearby MEC server $n$ that has the required resource to compute $T_k$ and meet the computation deadline. MEC server $m$ sends the offloaded task $T_k$ to the nearest such MEC server $n$, where, at MEC server $n$, the execution latency $l_{kn}$ of $T_k$ is calculated using (3.16). The total execution time of the offloaded task $T_k$ from edge device $k$ at MEC server $n$ can be expressed as follows:

\begin{equation}
\tau_{kmn}^{e} = \tau_k^{k \to m} + \tau_k^{m \to n} + l_{kn}, \quad \forall k \in \mathcal{K}_m,\ m, n \in \mathcal{M}.
\tag{3.18}
\end{equation}

In the worst-case scenario, when there is no resource in the whole collaboration space for computing the offloaded task $T_k$, MEC server $m$ sends $T_k$ to the DC via the backhaul link. The total execution time of the offloaded task $T_k$ from edge device $k$ via BS $m$ at the DC can be expressed as follows:

\begin{equation}
\tau_{kmDC}^{e} = \tau_k^{k \to m} + \tau_k^{m \to DC} + l_{kDC}, \quad \forall k \in \mathcal{K}_m,\ m \in \mathcal{M}.
\tag{3.19}
\end{equation}

Here, $l_{kDC}$ can be calculated using equation (3.16). Furthermore, we consider both the offloading and computation latency for calculating the total latency $\tau_k^{\text{off}}$ of the offloaded task $T_k$, where $\tau_k^{\text{off}}$ is given by:

\begin{equation}
\tau_k^{\text{off}} = y_k^{k \to m} \tau_{km}^{e} + \sum_{n \in \mathcal{M}} y_k^{m \to n} \tau_{kmn}^{e} + y_k^{m \to DC} \tau_{kmDC}^{e}, \quad \forall k \in \mathcal{K}_m,\ m \in \mathcal{M}.
\tag{3.20}
\end{equation}
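Equation (3.20) selects exactly one of the per-path totals (3.17)–(3.19) through the binary decision variables; a dict of one-hot decisions mimics this (names are illustrative):

```python
def tau_off(decisions, latencies):
    """Eq. (3.20): total offloading latency, where `decisions` is a
    one-hot map over the serving locations ('m' local MEC, 'n' neighbor,
    'dc' data center) and `latencies` holds the per-path totals
    from Eqs. (3.17)-(3.19)."""
    assert sum(decisions.values()) == 1, "task must be computed at one location"
    return sum(decisions[p] * latencies[p] for p in decisions)

# Example: the task is served at a neighbor MEC server (scenario (b)).
latency = tau_off({"m": 0, "n": 1, "dc": 0}, {"m": 1.0, "n": 2.5, "dc": 9.0})
```

The assertion mirrors the single-location constraints (3.21)–(3.22) introduced next.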

To join/coordinate communication and computation, we ensure that each task $T_k$ from each edge device $k$ is computed at only one location, i.e., there is no duplication in resource utilization, by formulating the following constraints:

\begin{equation}
(1 - x_k^m) + x_k^m \Big( y_k^{k \to m} + \sum_{n \in \mathcal{M}} y_k^{m \to n} + y_k^{m \to DC} \Big) = 1,
\tag{3.21}
\end{equation}

\begin{equation}
\max\{y_k^{k \to m}, y_k^{m \to n}, y_k^{m \to DC}\} \leq x_k^m, \quad \forall n \in \mathcal{M},\ k \in \mathcal{K}_m.
\tag{3.22}
\end{equation}

In other words, in the above equations, $\forall m, n \in \mathcal{M}$, each task $T_k$ is executed either at edge device $k$, at an MEC server, or at the DC.

3.3.3 Caching Model

In our caching model, the edge device offloads both task $T_k$ and the corresponding data $d_k$. Then, MEC server $m$ caches the data $d_k$, where the cached data $d_k$ can be served to edge devices upon the requests $\lambda_m^{d_k}$ for data $d_k$ that arrive at MEC server $m$. In addition, we consider that the cached input data $d_k$ of the offloaded task $T_k$ can be served after or before computation. Since the cache capacity of an MEC server is limited, when the cache storage is full, the MEC server needs to remove the least frequently reused data from the cache storage to give space to new offloaded data that needs to be stored. Here, when the cache storage is full, we use Least Frequently Used (LFU) as the cache replacement algorithm [62, 63], where each MEC server replaces the least frequently reused data. In other words, LFU depends on the number of requests $\lambda_m^{d_k}$ for cached content $d_k$ that reach each server $m$ and are satisfied.

To use the cache storage effectively, we define a caching decision variable $w_{km} \in \{0, 1\}$ that indicates whether or not to cache the data $d_k$ from edge device $k$ at MEC server $m$, where $w_{km}$ is given by:

\begin{equation}
w_{km} =
\begin{cases}
1, & \text{if the offloaded data } d_k \text{ from edge device } k \text{ is cached at MEC server } m \in \mathcal{M},\\
0, & \text{otherwise.}
\end{cases}
\tag{3.23}
\end{equation}

We consider that each MEC server $m$ uses $w_{km}$ to choose the data $d_k$ to cache. In other words, $w_{km}$ is the cache decision policy, while LFU complements the cache decision policy as the cache replacement policy. Furthermore, to ensure that our caching decision does not violate the cache capacity, we formulate the following constraint:

\begin{equation}
\Big( \sum_{k \in \mathcal{K}_m} y_k^{k \to m} + \sum_{n \neq m \in \mathcal{M}} \sum_{k \in \mathcal{K}_n} y_k^{n \to m} \Big) w_{km} s(d_k) \leq C_m, \quad \forall m \in \mathcal{M},
\tag{3.24}
\end{equation}

where $C_m$ is the cache capacity of MEC server $m$. However, in the absence of the required resource at MEC server $m$ to cache data $d_k$, the RAT is used to find another nearby MEC server $n$ in the collaboration space that has enough resource to cache $d_k$. Otherwise, MEC server $m$ sends $d_k$ to the DC.
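The LFU replacement policy that backs the caching decision can be sketched as a small class (an illustrative, item-count cache; the dissertation's cache is sized in bytes $s(d_k)$):

```python
from collections import defaultdict

class LFUCache:
    """Minimal LFU cache: on overflow, evict the entry with the fewest
    satisfied requests, mirroring the replacement policy above."""
    def __init__(self, capacity_items):
        self.capacity = capacity_items
        self.store = {}
        self.requests = defaultdict(int)   # per-item request count

    def get(self, key):
        if key in self.store:
            self.requests[key] += 1        # a cache hit
            return self.store[key]
        return None                        # a cache miss

    def put(self, key, data):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda k: self.requests[k])
            del self.store[victim]         # evict least frequently used
        self.store[key] = data
        self.requests[key] += 1
```

The request counter plays the role of $\lambda_m^{d_k}$ in the model: popular data survives, rarely requested data is replaced.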

3.3.4 Control Model

To join the communication, computation, and caching (3C) approaches described in the above subsections, we introduce a new distributed optimization control that aims to join the 3C models. In other words, our control model is based on distributed optimization control, which is described in detail in the next subsection.

Our distributed optimization control has two main goals. The first goal is to minimize the backhaul bandwidth consumption, i.e., to maximize the backhaul bandwidth saving. To achieve this, we need to maximize the number of cache hits at the MEC servers, which results in reduced backhaul traffic, i.e., data traffic between MEC servers and DCs. Therefore, we use the caching reward $\Psi(\mathbf{x}, \mathbf{y}, \mathbf{w})$ to define the amount of backhaul bandwidth saved through caching [38], where the caching reward is expressed as follows:

\begin{equation}
\Psi(\mathbf{x}, \mathbf{y}, \mathbf{w}) = \sum_{m \in \mathcal{M}} \sum_{k \in \mathcal{K}_m} s(d_k)\, \lambda_m^{d_k}\, x_k^m \Big( y_k^{k \to m} w_{km} + \sum_{n \in \mathcal{M}} y_k^{m \to n} w_{kn} \Big),
\tag{3.25}
\end{equation}

where $\lambda_m^{d_k}$ denotes the arrival rate of the requests for data $d_k$ at MEC server $m$.
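For a single MEC server with fixed offloading and caching decisions, the inner sum of (3.25) reduces to "request rate times data size for every item cached at the edge"; a one-line numeric sketch (names are illustrative):

```python
def caching_reward(arrival_rates, sizes, cached_items):
    """Simplified Eq. (3.25): backhaul bandwidth saved equals
    lambda_m^{d_k} * s(d_k) summed over the data cached at the edge."""
    return sum(arrival_rates[k] * sizes[k] for k in cached_items)

# Example: caching item 'a' (3 req/s, 10 Mbit) saves 30 Mbit/s of backhaul.
saved = caching_reward({"a": 3, "b": 1}, {"a": 10, "b": 100}, {"a"})
```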

The second goal of our distributed optimization control is to minimize the total delay, where the total delay is defined as the total amount of time required to completely execute task $T_k$ of edge device $k$, either locally at the edge device or at the MEC servers. If edge device $k$ executes its task locally, it incurs the computation delay $\tau_k^{\text{loc}}$. In the case of offloading, if edge device $k$ offloads its task $T_k$ to a nearby MEC server, the total offloading delay $\tau_k^{\text{off}}$ is incurred; in other words, $\tau_k^{\text{off}}$ includes both the offloading and computation delays. Furthermore, to minimize both $\tau_k^{\text{loc}}$ and $\tau_k^{\text{off}}$, we define the total delay $\Theta(\mathbf{x}, \mathbf{y})$ as follows:

\begin{equation}
\Theta(\mathbf{x}, \mathbf{y}) = \sum_{m \in \mathcal{M}} \sum_{k \in \mathcal{K}_m} \big( (1 - x_k^m)\, \tau_k^{\text{loc}} + x_k^m\, \tau_k^{\text{off}} \big),
\tag{3.26}
\end{equation}

where each task $T_k$ is executed at edge device $k$, at any MEC server, or at the DC.

where each task Tk is executed edge device k, or at any MEC server, or at DC.

3.4 Problem Formulation and Solution

Here, we need to combine the above-described communication, computation, caching, and control models. Therefore, we propose the following optimization problem, which minimizes both bandwidth consumption and network latency subject to communication, computation, and caching resource constraints:

minimize_{x, y, w}   Θ(x, y) − η Ψ(x, y, w)   (3.27)

subject to:

∑_{k∈K_m} x_k^m a_k^m ≤ 1, ∀m ∈ M,   (3.27a)

∑_{k∈K_m} x_k^m p_k^m y_k^{k→m} ≤ P_m, ∀m ∈ M,   (3.27b)

x_k^m ( ∑_{k∈K_m} y_k^{k→m} + ∑_{n≠m∈M} ∑_{k∈K_n} y_k^{n→m} ) w_m^k s(d_k) ≤ C_m,   (3.27c)

(1 − x_k^m) + x_k^m ( y_k^{k→m} + ∑_{n∈M} y_k^{m→n} + y_k^{m→DC} ) = 1,   (3.27d)

max{ y_k^{k→m}, y_k^{m→n}, y_k^{m→DC} } ≤ x_k^m, ∀n.   (3.27e)

Here, we use the weight parameter η > 0.

Constraints: At each BS m, the constraint in (3.27a) ensures that the spectrum allocated to edge devices does not exceed the total available spectrum. In addition, at each MEC server m, the constraints in (3.27b) and (3.27c) ensure that the computational and caching resources assigned to edge devices do not exceed the available computational and caching resources. Furthermore, the constraints in (3.27d) and (3.27e) ensure that each offloaded task T_k is executed at only one location, either at the edge device, at an MEC server, or at the DC, i.e., there is no duplication in resource utilization.

We use the following objective function to represent (3.27):

B(x, y, w) := Θ(x, y) − η Ψ(x, y, w),   (3.28)

where both (3.27) and (3.28) have the same structure and constraints. Furthermore, the objective function in (3.28) is non-convex and difficult to handle. Hence, to make (3.28) convex and easy to solve, we introduce an upper-bound problem of the formulated objective function, which is a convex problem, and we use the Block Successive Upper-bound Minimization (BSUM) method described in Section 3.4.1 to solve it.

3.4.1 Overview of BSUM Method

The Block Successive Upper-bound Minimization (BSUM) method is a distributed algorithm that decomposes a problem into subproblems for parallel computing. We choose BSUM over other distributed algorithms because it allows us to split our objective function into subproblems, where each subproblem can be addressed separately. BSUM also has an advantage over other distributed algorithms in terms of solution speed [15, 65]. BSUM in its original form can be expressed as follows:

minimize_x   g(x_1, x_2, . . . , x_J),   s.t. x_j ∈ Z_j, ∀j ∈ J, j = 1, . . . , J.   (3.29)

Furthermore, J is used as a set of indexes, in which Z := Z_1 × Z_2 × · · · × Z_J, and g(·) is a continuous function. In addition, for j = 1, . . . , J, we utilize x_j as a block of variables, where Z_j is a closed convex set. Furthermore, at each iteration t, Block Coordinate Descent (BCD) can

be applied to solve the above optimization problem. Every single block of variables can be optimized by solving the following problem:

x_j^t ∈ argmin_{x_j ∈ Z_j} g(x_j, x_{−j}^{t−1}),   (3.30)

such that x_{−j}^{t−1} := (x_1^{t−1}, . . . , x_{j−1}^{t−1}, x_{j+1}^{t−1}, . . . , x_J^{t−1}), and x_k^t = x_k^{t−1} for k ≠ j.
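As a toy illustration of the BCD update in (3.30), consider the hypothetical two-block objective g(x_1, x_2) = (x_1 − 1)^2 + (x_2 − 2)^2 + (x_1 − x_2)^2, whose per-block minimizers have closed form; this example is ours, not from the thesis:

```python
# Cyclic BCD on g(x1, x2) = (x1-1)^2 + (x2-2)^2 + (x1-x2)^2:
# each block is minimized exactly while the other is held at its last value.
def bcd(iters=50):
    x1, x2 = 0.0, 0.0
    for _ in range(iters):
        x1 = (1.0 + x2) / 2.0   # argmin over x1 with x2 fixed
        x2 = (2.0 + x1) / 2.0   # argmin over x2 with x1 fixed
    return x1, x2
```

Setting both partial derivatives to zero gives the minimizer (4/3, 5/3), which the iteration approaches geometrically.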

When (3.29) is a non-convex problem, (3.29) and (3.30) become complicated to solve, and applying BCD does not always guarantee an optimal solution. Therefore, to address this issue, for a given feasible point y ∈ Z, BSUM can be utilized by introducing a proximal upper-bound function h(x_j, y) of the formulated problem g(x_j, y_{−j}). We use a proximal upper-bound function, which is convex, because it is easier to solve compared with other approaches such as the linear upper bound, Jensen's upper bound, and the quadratic upper bound described in [65]. Furthermore, the formulated proximal upper-bound problem h(x_j, y) has to meet the following assumptions:

Assumption 1 The following assumptions must be satisfied:

(i) h(y_j, y) = g(y),

(ii) h(x_j, y) ≥ g(x_j, y_{−j}),

(iii) h′(x_j, y; q_j)|_{x_j = y_j} = g′(y; q), ∀ y_j + q_j ∈ Z_j.

Both assumptions 1(i) and 1(ii) ensure that the proximal upper-bound problem h is a global upper bound of g. In addition, 1(iii) ensures that h(x_j, y) takes steps proportional to the negative gradient of g(x_j, y_{−j}) in the direction q. To satisfy 1(iii), the first-order derivative of h(x_j, y) must exist. Therefore, to form the proximal upper-bound function, we add a quadratic penalization to the objective function. The proximal upper-bound

function can be mathematically expressed as follows:

h(x_j, y) = g(x_j, y_{−j}) + (ϱ/2) (x_j − y_j)^2,   (3.31)

where ϱ takes a small positive value and is used as a penalty parameter.
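For the same toy objective g(x_1, x_2) = (x_1 − 1)^2 + (x_2 − 2)^2 + (x_1 − x_2)^2 used above, the proximal block update under (3.31) also has a closed form. This is our own illustration, with `rho` standing for the penalty parameter ϱ:

```python
# Proximal block update under (3.31): minimize
#   h(x1, y) = (x1-1)^2 + (x1-x2)^2 + (rho/2)*(x1 - y1)^2
# over x1, with x2 fixed and y1 the previous value of x1.
def prox_update_x1(x2, y1, rho):
    # setting d/dx1 = 2(x1-1) + 2(x1-x2) + rho*(x1-y1) = 0 gives:
    return (2.0 + 2.0 * x2 + rho * y1) / (4.0 + rho)
```

With ϱ = 0 the update reduces to the plain BCD minimizer, while a large ϱ keeps the iterate close to its previous value, which is exactly the damping role of the quadratic penalty.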


At each iteration t, to solve (3.31), BSUM uses the following updates:

x_j^t ∈ argmin_{x_j ∈ Z_j} h(x_j, x_{−j}^{t−1}), ∀j ∈ J,
x_k^t = x_k^{t−1}, ∀k ∉ J.   (3.32)

In BSUM, to select each coordinate j ∈ J in the above update (3.32), the following selection rules, described in [65] and summarized below, can be utilized:

• Cyclic rule: At each iteration t, coordinates are chosen in a cyclic arrangement such as 1, 2, 3, . . . , J, 1, 2, 3, . . ..

• Gauss-Southwell rule: At each iteration t, the Gauss-Southwell rule chooses a single index j* ∈ J that satisfies j* ∈ {j : ‖x_j^t − x_j^{t−1}‖ ≥ q max_k ‖x_k^t − x_k^{t−1}‖} for j, k ∈ J. Here, q ∈ [0, 1] is a constant.

• Randomized rule: At iteration t, the Randomized rule uses a probability vector p^t = (p_1^t, . . . , p_J^t) and a constant q_min ∈ [0, 1] such that ∑_{j∈J} p_j^t = 1 and p_j^t ≥ q_min. Then, the Randomized selection rule selects a random index j* ∈ J with Pr(j ∈ J | x^{t−1}, x^{t−2}, . . . , x^0) = p_j^t.

Algorithm 2 : Original form of BSUM algorithm [65]
1: Input: Vector x;
2: Output: Vector x*;
3: Initialization: ε > 0, t = 0;
4: Find an initial feasible point x^0 ∈ Z;
5: Repeat;
6: Select index set J;
7: Compute x_j^t ∈ argmin h(x_j, x_{−j}^{t−1}), ∀j ∈ J;
8: Set x_k^t = x_k^{t−1}, ∀k ∉ J;
9: t = t + 1;
10: Until ‖(h_j^{(t)} − h_j^{(t+1)}) / h_j^{(t)}‖ ≤ ε;
11: Then, use x* = x_j^{(t+1)} as the solution.

We show the BSUM algorithm in its original form in Algorithm 2, where BSUM is a generalized version of BCD. BSUM optimizes the upper-bound problem h(x_j, y) of the objective function g(x_j, y_{−j}) in a block-by-block fashion. In other words, BSUM is an appropriate method for handling both smooth and non-smooth separable convex problems whose constraints have a linear coupling. For such problems, BSUM can be utilized iteratively to update the blocks of variables. In addition, to minimize the proximal upper-bound function h(x_j, y), BSUM takes steps proportional to the negative of the gradient of g(x_j, y_{−j}) until h(x_j, y) reaches or converges to a stationary solution called a coordinate-wise minimum. In other words, a stationary solution of h(x_j, y) is also a coordinate-wise minimum of h(x_j, y) if and only if the blocks of variables arrive at the minimum point x* = x_j^{(t+1)}; at this minimum point, the BSUM algorithm cannot find a better descent direction [65–67].

Remark 1 (Convergence of BSUM) BSUM takes O(log(1/ε)) iterations to converge to an ε-optimal solution, which is a coordinate-wise minimum. Therefore, BSUM has sub-linear convergence (proofs in [65] and [68]).

With respect to x_j, an ε-optimal solution x_j^ε ∈ Z_j is reached when x_j^ε ∈ {x_j | x_j ∈ Z_j and h(x_j, x^t, y^t) − h(x_j^*, x^t, y^t) ≤ ε}. Then, h(x_j^*, x^t, y^t) is considered the optimal value of h(x_j, y).
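Putting the pieces together, an end-to-end run of the Algorithm 2 loop can be sketched on the hypothetical two-block objective g(x_1, x_2) = (x_1 − 1)^2 + (x_2 − 2)^2 + (x_1 − x_2)^2 with cyclic selection, closed-form proximal updates, and the relative-decrease stopping rule; this toy example and the name `bsum` are ours:

```python
# Toy BSUM loop: cyclic proximal block updates on
#   g(x1, x2) = (x1-1)^2 + (x2-2)^2 + (x1-x2)^2,
# stopping when the relative decrease of g falls below eps.
def bsum(rho=0.5, eps=1e-10, max_iters=10000):
    x1, x2 = 0.0, 0.0

    def g(a, b):
        return (a - 1.0) ** 2 + (b - 2.0) ** 2 + (a - b) ** 2

    prev = g(x1, x2)
    for _ in range(max_iters):
        x1 = (2.0 + 2.0 * x2 + rho * x1) / (4.0 + rho)  # prox update, block 1
        x2 = (4.0 + 2.0 * x1 + rho * x2) / (4.0 + rho)  # prox update, block 2
        cur = g(x1, x2)
        if abs(prev - cur) / prev <= eps:               # epsilon-optimal stop
            break
        prev = cur
    return x1, x2
```

The iterates converge to the coordinate-wise minimum (4/3, 5/3), which for this convex toy problem is also the global minimizer.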

3.4.2 Proposed Solution: Distributed Optimization Control Algorithm

In (3.28), the formulated problem is non-convex and difficult to solve because its decision variables are utilized either at the edge device, at the MEC server, or at the DC, i.e., in different locations. Therefore, to tackle our problem, we need a distributed algorithm that allows us to decompose the formulated problem into small subproblems. To achieve this, we consider BSUM as an appropriate algorithm for handling our optimization problem in (3.28) in a distributed way by solving each subproblem separately. Therefore, to use BSUM in our joint 4C framework, we consider

X := {x : ∑_{m∈M} ∑_{k∈K_m} x_k^m = 1, x_k^m ∈ [0, 1]},

Y := {y : ∑_{m∈M} ∑_{k∈K_m} y_k^{k→m} + y_k^{m→n} + y_k^{m→DC} = 1, y_k^{k→m}, y_k^{m→n}, y_k^{m→DC} ∈ [0, 1]},

W := {w : ∑_{m∈M} ∑_{k∈K_m} w_m^k + w_n^k + w_{DC}^k = 1, w_m^k, w_n^k, w_{DC}^k ∈ [0, 1]}

to be the feasible sets of the variables x, y, and w, respectively.

Furthermore, to solve the formulated problem in (3.28), the following two steps are required:

• Step 1: We propose a proximal convex problem by adding a quadratic penalization to the formulated problem in (3.28); this makes the proximal convex problem an upper bound of (3.28).

• Step 2: Rather than minimizing the formulated problem in (3.28), which is difficult to solve, we use BSUM to minimize the proximal upper-bound problem of (3.28). The proximal upper-bound problem should guarantee that it takes steps proportional to the negative gradient.

To use BSUM, in the first step, we propose the proximal upper-bound function B_j below, where B_j is the convex proximal upper bound of the formulated problem in (3.28). At each iteration t, ∀j ∈ J, to ensure that B_j is convex, we add a quadratic penalization to the formulated problem in (3.28) as follows:

B_j(x_j, x^{(t)}, y^{(t)}, w^{(t)}) := B(x_j, x, y, w) + (ϱ_j/2) ‖x_j − x‖^2.   (3.33)

The proximal upper-bound function (3.33) of (3.28) can be defined similarly for the other vectors of variables y_j and w_j. Here, ϱ_j > 0 is a positive penalty parameter. Furthermore, the quadratic term (ϱ_j/2) ‖x_j − x‖^2 added to (3.28) makes the proximal upper-bound function in (3.33) convex. At each iteration t, with respect to x_j, y_j, and w_j, the problem in (3.33) has x, y, and w as vectors of minimizers, which are the solutions of the previous step (t − 1). Moreover, at each iteration t + 1, the solution of the problem in (3.33) is improved by solving the problems given below:

x_j^{(t+1)} ∈ argmin_{x_j ∈ X} B_j(x_j, x^{(t)}, y^{(t)}, w^{(t)}),   (3.34)

y_j^{(t+1)} ∈ argmin_{y_j ∈ Y} B_j(y_j, y^{(t)}, x^{(t+1)}, w^{(t)}),   (3.35)

w_j^{(t+1)} ∈ argmin_{w_j ∈ W} B_j(w_j, w^{(t)}, x^{(t+1)}, y^{(t+1)}).   (3.36)


To solve (3.34), (3.35), and (3.36), we propose Algorithm 3 as the distributed optimization control algorithm for the joint 4C framework. The proposed algorithm is an application of the original BSUM algorithm presented in Algorithm 2. In addition, we relax the variables x_j, y_j, and w_j by setting their values in the closed interval [0, 1]. Then, after solving (3.34), (3.35), and (3.36), we apply a threshold rounding technique [69] in the proposed Algorithm 3 to ensure that the relaxed x_j, y_j, and w_j become binary decision variables. In our rounding technique, we use a positive rounding threshold θ. As an illustrative example of this technique, let us consider x_k^{m*} ∈ x_j^{(t+1)} and θ ∈ (0, 1), where the binary decision variable x_k^{m*} can be expressed as follows:

x_k^{m*} = 1, if x_k^{m*} ≥ θ; 0, otherwise,   (3.37)
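The threshold rounding in (3.37) can be sketched in a few lines; the dictionary-of-decisions interface is our own illustrative choice:

```python
# Threshold rounding of (3.37): a relaxed decision in [0, 1] becomes 1 when
# it reaches the threshold theta in (0, 1), and 0 otherwise.
def round_decisions(relaxed, theta):
    return {name: 1 if value >= theta else 0
            for name, value in relaxed.items()}
```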

where (3.37) can be applied to the other decision variables y_j and w_j as well. However, as illustrated in [70], the solution obtained using the rounding technique may violate the 3C resource constraints. Furthermore, as presented in [70], to address this challenge related to the rounding technique, we need to solve (3.33) in the form B_j + ξ∆. In addition, the constraints (3.27a), (3.27b), and (3.27c) are updated as follows:

∑_{k∈K_m} x_k^m a_k^m ≤ 1 + ∆_a, ∀m ∈ M,   (3.38)

∑_{k∈K_m} x_k^m p_k^m y_k^{k→m} ≤ P_m + ∆_p, ∀m ∈ M,   (3.39)

x_k^m ( ∑_{k∈K_m} y_k^{k→m} + ∑_{n≠m∈M} ∑_{k∈K_n} y_k^{n→m} ) w_m^k s(d_k) ≤ C_m + ∆_m.   (3.40)

We use ∆_a to denote the communication resource constraint violation, ∆_p the computational resource constraint violation, and ∆_m the caching resource constraint violation. Therefore, the maximum violation of the resource constraints can be written as ∆ = ∆_a + ∆_p + ∆_m, where ξ is used as the weight parameter associated with ∆. The terms ∆_a, ∆_p, and ∆_m are given by:

∆_a = max{0, ∑_{k∈K_m} x_k^m a_k^m − 1}, ∀m ∈ M,   (3.41)

∆_p = max{0, ∑_{k∈K_m} x_k^m p_k^m y_k^{k→m} − P_m}, ∀m ∈ M,   (3.42)

∆_m = max{0, x_k^m ( ∑_{k∈K_m} y_k^{k→m} + ∑_{n≠m∈M} ∑_{k∈K_n} y_k^{n→m} ) w_m^k s(d_k) − C_m}.   (3.43)

Furthermore, we assume that the solution is obtained when ∆_a = 0, ∆_p = 0, and ∆_m = 0, i.e., when there are zero violations of the 3C resource constraints.
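Each violation term in (3.41)–(3.43) is simply the positive part of the amount by which a resource constraint is exceeded, which can be sketched as follows (the `(used, capacity)` pair interface is our own simplification of the full sums):

```python
# Violation terms in the spirit of (3.41)-(3.43): the positive part of the
# excess over capacity, and their sum Delta = Delta_a + Delta_p + Delta_m.
def violation(used, capacity):
    return max(0.0, used - capacity)

def total_violation(comm, comp, cache):
    # comm, comp, cache: (used, capacity) pairs for the 3C resources
    return sum(violation(u, c) for u, c in (comm, comp, cache))
```

A rounded solution is accepted only when the total violation is zero, matching the feasibility condition stated above.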

Based on both problems B_j and B_j + ξ∆ (the rounded problem), we can calculate the integrality gap, which measures the quality of the rounding technique using the ratio between the solutions of B_j and B_j + ξ∆. Using the integrality gap definition and proof provided in [69], we can define the integrality gap as follows:

Definition 1 (Integrality gap of rounding): For the formulated problem B_j in (3.33) and the rounded problem B_j + ξ∆, the integrality gap can be mathematically defined as:

β = minimize_{x, y, w}  B_j / (B_j + ξ∆),   (3.44)

where in B_j the relaxed variables x_j, y_j, and w_j are used to obtain the solution of B_j, while in B_j + ξ∆ the rounded variables x_j, y_j, and w_j are used to obtain the solution of B_j + ξ∆. The rounding is well performed if β ≤ 1 [69]. At β = 1, there is zero violation of the resource constraints (∆_a = 0, ∆_p = 0, and ∆_m = 0).
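Given the two objective values, the ratio in (3.44) is a one-liner; the function below is an illustrative sketch with assumed argument names:

```python
# Integrality gap (3.44): ratio between the relaxed objective B_j and the
# rounded objective B_j + xi * Delta. Rounding is well performed when
# beta <= 1, and beta = 1 corresponds to zero constraint violation.
def integrality_gap(b_relaxed, b_rounded, xi, delta):
    return b_relaxed / (b_rounded + xi * delta)
```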

For the joint 4C framework, we use the distributed optimization control algorithm (Algorithm 3).


We consider that each edge device k ∈ K uses the offloading decision x_k^m; when x_k^m = 1, the edge device offloads its task T_k and the corresponding data to the MEC server attached to its home BS. On receiving T_k, the BS uses its RAT to check the available resources. Algorithm 3 begins by initializing ε to a small positive number and t = 0. Here, ε ensures the convergence to an ε-optimal solution [65]. The proposed Algorithm 3 calculates the initial feasible variables (x^(0), y^(0), w^(0)) and iterates by selecting an appropriate index set at each iteration t. Furthermore, at each iteration t + 1, Algorithm 3 updates the solution by solving the formulated problems (3.34), (3.35), and (3.36) until ‖(B_j^{(t)} − B_j^{(t+1)}) / B_j^{(t)}‖ ≤ ε, i.e., it converges to an ε-optimal solution.

Furthermore, Algorithm 3 applies the rounding technique (3.37) and solves B_j + ξ∆ to obtain the binary solutions of x_j^{(t+1)}, y_j^{(t+1)}, and w_j^{(t+1)} and the appropriate resource allocations c, p, and R. In addition, Algorithm 3 ensures that B_j + ξ∆ reaches an ε-optimal solution. Based on the solution of B_j + ξ∆, Algorithm 3 computes the value of β; the best rounding is achieved when β ≤ 1. Then, Algorithm 3 considers x* = x_j^{(t+1)}, y* = y_j^{(t+1)}, and w* = w_j^{(t+1)} as the coordinate-wise minimum, i.e., the stationary solution. Finally, Algorithm 3 updates the RAT associated with the MEC server and dispatches the RAT update in the collaboration space.

The novelty of our algorithm for joint 4C in collaborative MEC (Algorithm 3) over the BSUM algorithm in its original form (Algorithm 2) lies in the implementation. The original BSUM algorithm is based on a distributed control model, while our algorithm for joint 4C uses both the distributed and hierarchical control models described in [71]. In our proposal, for the hierarchical control model, the edge devices first determine the decision variables x, while each MEC server m controls the tasks offloaded to its network by the connected edge devices, where it solves (3.34), (3.35), and (3.36).

In our distributed control model, to update the RAT information, MEC servers need to exchange resource utilization information with each other. This helps each MEC server handle the formulated optimization problem while maintaining the 3C resource assignment in the collaboration space within the fixed range of available communication, computation, and caching resources. In our approach, we consider that there is no centralized controller in the collaboration space for dispatching demands to MEC servers. In other words, each MEC server has to run our algorithm for joint 4C (Algorithm 3). This kind of distributed control approach was analyzed in [72] as a


Algorithm 3 : Distributed control algorithm for joint 4C in collaborative MEC
1: Input: T, B_m, P_m, and C_m: a vector of tasks and the available communication, computational, and caching resources;
2: Output: x*, y*, w*; R: communication resource allocation, p: computation allocation, and c: cache allocation;
3: Each edge device k ∈ K decides on the decision variable x_k^m for offloading;
4: When x_k^m = 1, edge device k ∈ K offloads its task T_k to home BS m ∈ M;
5: BS m ∈ M checks its RAT for each received task T_k;
6: Initialization: ε > 0, t = 0;
7: Get initial feasible points for x^(0), y^(0), w^(0);
8: repeat
9: Select index set J;
10: Find x_j^{(t+1)} ∈ argmin_{x_j ∈ X} B_j(x_j, x^(t), y^(t), w^(t));
11: Set x_k^{t+1} = x_k^t, ∀k ∉ J;
12: Repeat step 10 to find y_j^{(t+1)} and w_j^{(t+1)} by solving (3.35) and (3.36);
13: t = t + 1;
14: until ‖(B_j^{(t)} − B_j^{(t+1)}) / B_j^{(t)}‖ ≤ ε;
15: Find a binary solution of x_j^{(t+1)}, y_j^{(t+1)}, w_j^{(t+1)} and the resource allocations c, p, and R by using the rounding technique (3.37);
16: Solve B_j + ξ∆ and calculate β. If β ≤ 1, the solution x* = x_j^{(t+1)}, y* = y_j^{(t+1)}, and w* = w_j^{(t+1)} is obtained;
17: Update the RAT information and dispatch the RAT update in the collaboration space.

dynamic feedback control approach. Here, in our proposal, the RAT update serves as the feedback of the state (x^(t), y^(t), w^(t)) for each MEC server at iteration t. This helps, at the next iteration t + 1, to find a new state (x^(t+1), y^(t+1), w^(t+1)). In the end, we consider (x_j^*, y_j^*, w_j^*) as a stability point or a network equilibrium point, which satisfies a coordinate-wise minimum.

3.5 Simulation Results

3.5.1 Simulation Setting

Based on the simulation results presented in Chapter 2, we formed 1000 collaboration spaces using the BSs dataset [75]. From these 1000 collaboration spaces, one collaboration space of 13 BSs was selected randomly. Then, we associate one MEC server with each BS. In addition, at each BS, we start with an initial number of edge devices K = 100 and increase K exponentially up to K = 3200.


Furthermore, at each time slot t, each edge device k has one task Tk.

For the communication model, we set the transmission power to ρ_k = 27.0 dBm [37], the path loss factor to 4, and the channel bandwidth in the range from B_m = 25 MHz to B_m = 32 MHz [75]. Furthermore, in the collaboration space, we set the bandwidth of the X2 link between BSs in the range from Γ_m^n = 20 MHz to Γ_m^n = 25 MHz. The bandwidth of the wired link between the BS and the DC is set in the range from Ω_m^{DC} = 50 Mbps to Ω_m^{DC} = 120 Mbps. Furthermore, for the computation and caching resources of MEC server m, we use computation resources in the range from 2 GHz to 2.5 GHz [77] and cache storage in the range from 100 TB to 500 TB. To retrieve contents, we set the number of requests for contents in the range from λ_m^{d_k} = 578 to λ_m^{d_k} = 3200. In addition, the demand for contents and the popularity of the contents are based on Zipf distributions [79, 80].
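The Zipf popularity model assumed for the content demand can be sketched as follows: the probability of requesting the content ranked i among N contents with exponent a is proportional to 1/i^a (this helper is our own illustration, not the thesis code):

```python
# Zipf content popularity: P(i) = (1/i^a) / sum_{r=1..N} (1/r^a).
def zipf_popularity(i, N, a):
    norm = sum(1.0 / (r ** a) for r in range(1, N + 1))
    return (1.0 / (i ** a)) / norm
```

A larger exponent a concentrates the demand on the top-ranked contents, which is why the cache hit ratio reported later grows with a.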

In task T_k from edge device k, we use synthetic input data d_k, where the size of the generated data s(d_k) is in the range from 2 GB to 7 GB. In addition, the task computation deadline τ_k is set in the range from τ_k = 0.02 seconds to τ_k = 12 seconds. Furthermore, the workload z_k of task T_k from edge device k is set in the range from z_k = 452.5 cycles/bit to z_k = 737.5 cycles/bit [77]. Since we consider that edge device k can execute its task T_k locally if it has enough resources, we set the computation resources at each edge device in the range from 0.5 GHz to 1.0 GHz [78].

3.5.1.1 Performance Metrics

Throughput: One of the key performance indicators of effective resource utilization is throughput. Therefore, to evaluate our proposed algorithms in terms of both network and computation throughput, we define the network throughput as the number of data units the network can send and receive in a given period of time [81, 82]; we measure the network throughput in Mbps. On the other hand, we define the computation throughput as the number of task units that can be computed within a given period of time; we measure the computation throughput in million instructions per second (MIPS).

Delay: In addition to throughput, we consider delay as another key performance indicator of effective resource utilization. To measure delay, we consider that, in the collaboration space, each offloaded task T_k from each edge device k finishes its journey at the MEC server which has the resources required to execute the edge device's demand. On receiving the offloaded task T_k, the MEC server computes T_k, caches the computation output, and sends the computation output back to the edge device. Accordingly, we use the total delay as an end-to-end delay, defined as the period of time between offloading task T_k from edge device k and getting the corresponding computation output back at edge device k. However, the end-to-end delay does not permit us to show the computation delay and the offloading delay separately. Therefore, we add the execution delay and the transmission delay defined in Section 3.3 as additional delay metrics.

Bandwidth-saving and Cache Hit Ratio: The increase in cache hits contributes to the increase in the bandwidth saving defined in (3.25), because cache hits help reduce the data traffic between edge devices and DCs. Here, to evaluate our proposal in terms of cache hits and misses in the collaboration space, we consider that a cache hit h_m^{d_k} ∈ {0, 1} happens if the demand for content d_k can be satisfied from the cache storage available at any MEC server m. Otherwise, a cache miss (1 − h_m^{d_k}) happens if the demand for content d_k cannot be satisfied by any MEC server m. Furthermore, the probability of a cache hit in the collaboration space for content d_k can be mathematically expressed as follows:

P_{d_k} = ( ∑_{k∈K} ∑_{m∈M} h_m^{d_k} ) / ( ∑_{k∈K} ∑_{m∈M} (h_m^{d_k} + (1 − h_m^{d_k})) ).   (3.45)

Here, ∑_{k∈K} ∑_{m∈M} h_m^{d_k} denotes the total number of cache hits, while ∑_{k∈K} ∑_{m∈M} (h_m^{d_k} + (1 − h_m^{d_k})) denotes the sum of cache hits and cache misses.
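Since each demand yields a 0/1 hit indicator, the denominator of (3.45) is just the total number of demands, and the ratio can be sketched as follows (a simplification assuming one flat list of indicators):

```python
# Cache hit probability in the spirit of (3.45): total hits divided by
# hits plus misses; `hits` holds one 0/1 entry per demand.
def cache_hit_ratio(hits):
    if not hits:
        return 0.0
    return sum(hits) / len(hits)
```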

3.5.2 Numerical Results

Through using a collaboration space of 13 BSs, where each BS is associated with one MEC server, Fig. 3.2 shows the solution of the linear combination of delay and bandwidth saving as the cost in the optimization problem (3.33). Here, to solve the proximal upper-bound problem in (3.33), we use our distributed optimization control algorithm (Algorithm 3). In addition, without using the rounding technique, we compare our proposal with Douglas-Rachford Splitting (D-R-S). The Douglas-Rachford splitting method is a distributed optimization approach proposed in [83] that decomposes a problem into subproblems and handles each subproblem separately. To introduce D-R-S, let us consider two objective functions f and g, where the D-R-S method consists in minimizing f(x) + g(x) using the following procedure: at the first iteration t = 0, D-R-S begins with an initial feasible y^(0), then updates both x and y so that x^(t) = prox_f(y^(t−1)), while y^(t) = y^(t−1) + prox_g(2x^(t) − y^(t−1)) − x^(t). Finally, D-R-S considers prox_f and prox_g as the two proximal problems of f and g [83].

Figure 3.2: Optimal value of B_j (3.33) (without rounding).

Figure 3.3: Optimal value of B_j + ξ∆ (after rounding).
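The D-R-S iteration above can be sketched on a hypothetical pair of objectives with closed-form proximal operators, f(x) = 0.5(x − a)^2 and g(x) = 0.5(x − b)^2; this toy example is ours, not from [83]:

```python
# Toy Douglas-Rachford splitting run: for f(x) = 0.5*(x-a)^2 the proximal
# operator is prox_f(v) = (v + a) / 2, and similarly for g. The minimizer
# of f + g is (a + b) / 2.
def drs(a, b, iters=100):
    def prox_f(v):
        return (v + a) / 2.0

    def prox_g(v):
        return (v + b) / 2.0

    y = 0.0
    for _ in range(iters):
        x = prox_f(y)                       # x^t = prox_f(y^{t-1})
        y = y + prox_g(2.0 * x - y) - x     # reflected update of y
    return prox_f(y)
```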

The simulation results in Fig. 3.2 show the convergence of B_j without the rounding technique. To solve (3.33), we apply both the Douglas-Rachford splitting method [83] and our distributed control algorithm. In our proposal, to choose indexes, we apply the different coordinate selection rules defined in [65], i.e., Cyclic, Gauss-Southwell, and Randomized. In addition, in the proximal


Figure 3.4: CDF of computation throughput.

Figure 3.5: Transmission delay.

upper-bound problem in (3.33), we set the positive penalty parameter ϱ_j for the quadratic terms in the range from ϱ_j = 0.2 to ϱ_j = 100. Furthermore, Fig. 3.2 shows that our proposed algorithm (Algorithm 3) and the Douglas-Rachford splitting algorithm have almost the same performance. Moreover, the formulated problem in (3.33) converges to a stationary point, which is the coordinate-wise minimum point, i.e., the solution of (3.33). Therefore, the coordinate-wise minimum point is the equilibrium and stability point of B_j in (3.33).

Therefore, we apply the rounding technique to the solution of (3.33) presented in Fig. 3.2 and solve B_j + ξ∆. For the rounding technique, we set the positive rounding threshold to θ = 0.7, while the weight parameter ξ of ∆ is set in the range from ξ = 0.02 to ξ = 2.0. Fig. 3.3 shows


Figure 3.6: Computation delay.

Figure 3.7: Normalized cache hits in collaboration space.

the simulation results after applying the rounding technique, which guarantees that the relaxed vectors x_j, y_j, and w_j become binary variables. Then, we compute β to ensure that the rounding technique is well performed and has zero violation of the 3C resource constraints by solving B_j + ξ∆. Furthermore, comparing Fig. 3.2 and Fig. 3.3, the differences between the two figures reside in the step sizes required to reach the minimum point and the sizes of the two problems B_j and B_j + ξ∆. The results in these figures show that both problems B_j and B_j + ξ∆ converge to almost the same minimum point, which is the stability point. Therefore, with or without the rounding approach, (3.33) converges to a coordinate-wise minimum point with zero violation of the 3C resource constraints, i.e., β = 1.

We evaluate our proposal in terms of both network and computation throughput. For the network throughput, our approach shows that the network throughput increases up to 18 Mbps, where the different coordinate selection rules in our proposal, namely Cyclic, Randomized, and Gauss-Southwell, and the Douglas-Rachford splitting algorithm experience the same performance. On the other hand, for the computation throughput, the simulation results in Fig. 3.4 show the cumulative distribution function (CDF), where the Cyclic selection rule in our proposal and the Douglas-Rachford splitting (D-R-S) algorithm use more computation resources of the MEC server, and the computation throughput can reach 0.91 × 10^6 MIPS at each MEC server. However, in our proposal, the Randomized and Gauss-Southwell selection rules utilize fewer computation resources of the MEC server, where the computation throughput can reach 0.48 × 10^6 MIPS at each MEC server. The Gauss-Southwell selection rule performs well compared to the other coordinate selection rules due to the way it chooses the indexes to use: at each iteration, the Gauss-Southwell selection rule selects an index that efficiently utilizes the computation resources rather than selecting indexes cyclically or randomly.

In addition to throughput, we evaluate our proposal in terms of end-to-end delay, where the end-to-end delay, i.e., the total delay, covers the interval of time from offloading task T_k at edge device k to getting the corresponding computation output back at edge device k. Here, we divide the end-to-end delay into transmission delay and computation delay. The simulation results in terms of transmission delay are shown in Fig. 3.5. In this figure, the solid blue lines show the median, while the dashed black lines show the mean. For the notation in this figure, we use G-S for Gauss-Southwell, Ran for Randomized, Cyc for Cyclic, and D-R-S for Douglas-Rachford splitting. The simulation results show that G-S has 0.097 seconds and Cyclic has 0.093 seconds as the mean transmission delay. Furthermore, Fig. 3.6 shows the mean computation delay, which varies from 0.098 (Cyc) to 0.153 (D-R-S) seconds. The total delay is the sum of the computation and transmission delays; therefore, the total delay meets the required computation deadline. However, Douglas-Rachford splitting and G-S are characterized by a higher delay than the other coordinate selection rules due to the way they select indexes (for G-S) and perform splitting (for D-R-S). In addition, Cyclic and D-R-S use more time and computation resources. Finally, we can conclude that D-R-S has a higher computational delay than our proposal (BSUM-based) with the different coordinate selection rules. Moreover, the only difference between the other techniques and


Figure 3.8: Generated content ranking using Zipf distribution [113].

Figure 3.9: Bandwidth saving due to caching.

D-R-S in Figs. 3.2, 3.3, 3.4, 3.5, and 3.6 lies in the computation delay and the utilization of computation resources. Since computation resources at the MEC server are limited and each offloaded task Tk from edge device k has to be fully completed within a certain interval of time, i.e., the computation deadline, the simulation results indicate that our proposed algorithm for joint 4C outperforms the D-R-S approach.

For bandwidth saving and cache hit ratio, Fig. 3.7 shows the normalized cache hits. Here, we calculate the cache hit ratio Pdk using (3.45). The simulation results presented in this figure show that the cache hit ratio increases with the value of the Zipf exponent parameter a. Furthermore, for a = 2.0, many contents become popular due to the high number of demands. This results in a cache hit ratio of 15.2% of the total number of demands λdkm for content dk. The choice of the range a = 0.5 to a = 2.0 comes from the results presented in Fig. 3.8, where a remarkable difference in convergence is observed when the Zipf exponent parameter a lies within this range. However, when the requested contents are not available/cached in the collaboration space, we consider that the MEC server sends the requests to the remote cloud. Cache hits help in minimizing the number of demands λdkm that need to be sent to the DC over the backhaul link. Therefore, by using the number of cache hits, the size of the cached contents s(dk), and the number of demands λdkm that reach the MEC servers in the collaboration space, we can calculate the bandwidth saving formulated in (3.25).

For bandwidth saving, the simulation results in Fig. 3.9 show the CDF of the bandwidth saving in Gigabytes (GB) due to caching. At the starting points of this figure, the bandwidth saving is nearly zero because our caching approach is based on content prefetching, where the MEC server caches contents first so that they can be reused later on demand. As the number of demands increases, the cache hits also increase. Therefore, when a = 2.0 and η = 1, a maximum bandwidth saving of 2.45 × 10^7 GB can be reached. In addition, we consider that the increase in demands for contents goes with an increase in communication, computation, and caching resource utilization, which also causes the total delay to increase. The simulation results of our numerical analysis demonstrate that the cache resource utilization can reach 3 × 10^10 GB as the number of demands increases, up to a = 2.0 and η = 1. In other words, the increase in the utilization of cache storage has the positive impact of increasing the cache hit ratio.
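The Zipf-driven cache behavior evaluated above can be sketched as follows; the content counts, cache size, and per-content size are hypothetical placeholders rather than the simulation parameters, and the hit ratio is simply the cumulative popularity of the prefetched top-ranked contents.

```python
import numpy as np

# Illustrative sketch (not the thesis setup): content popularity follows a
# Zipf distribution with exponent a; the top-C contents are prefetched into
# the MEC cache, and every cache hit saves the content size in backhaul GB.

def zipf_popularity(num_contents, a):
    ranks = np.arange(1, num_contents + 1)
    p = ranks ** (-a)
    return p / p.sum()              # request probability per content rank

def cache_hit_ratio(num_contents, cache_size, a):
    # Prefetching the cache_size most popular contents means the hit ratio
    # is the cumulative popularity of the cached ranks.
    return zipf_popularity(num_contents, a)[:cache_size].sum()

def bandwidth_saving(num_demands, num_contents, cache_size, a, content_size_gb):
    hits = num_demands * cache_hit_ratio(num_contents, cache_size, a)
    return hits * content_size_gb   # GB that never cross the backhaul link

for a in (0.5, 1.0, 2.0):
    hr = cache_hit_ratio(10_000, 500, a)
    print(f"a={a}: hit ratio {hr:.3f}, "
          f"saving {bandwidth_saving(1e6, 10_000, 500, a, 0.1):.0f} GB")
```

As in Fig. 3.7, a larger Zipf exponent concentrates demand on a few popular contents, so the same cache captures a larger fraction of requests and saves more backhaul bandwidth.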

3.6 Summary

In this chapter, we presented our joint Communication, Computation, Caching, and Control (4C) framework for collaborative MEC. In the proposed framework, we introduced collaboration spaces for MEC servers, where MEC servers of the same collaboration space cooperate to satisfy edge devices' demands for communication, computation, and caching resources. We formulated the joint 4C framework as an optimization problem that aims to minimize both bandwidth consumption and network latency, subject to communication, computation, and caching resource constraints. However, the formulated problem was non-convex and intractable. Therefore, to simplify it, we proposed a proximal upper-bound problem of the formulated problem, which is convex and easy to solve. Furthermore, to solve the proximal upper-bound problem, we applied the Block Successive Upper-bound Minimization approach and proposed a distributed optimization control algorithm for joint 4C. For the performance evaluation, we compared our proposed algorithm with the Douglas-Rachford Splitting algorithm. Simulation results show that our distributed optimization control algorithm outperforms the Douglas-Rachford Splitting method in terms of computation time and computation resource utilization.


Chapter 4

Deep Learning Based Caching for Self-Driving Cars in Multi-access Edge Computing

4.1 Overview

The self-driving car has recently been introduced to prevent human errors and bad decisions. Furthermore, in self-driving cars, the driver's seat and steering wheel will disappear. In other words, self-driving cars will have more space, which can be used for lounge or entertainment services. Therefore, when self-driving cars start being used in public transport, new forms of infotainment are required to make travel more enjoyable for passengers, where the choice of infotainment contents depends on passengers' features such as emotion, location, age, and gender. However, retrieving infotainment contents from a remote cloud or Data Center (DC) can disrupt infotainment services because of the high communication delay between a self-driving car and the DC. To overcome these issues, caching infotainment contents in self-driving cars and near the self-driving cars is a suitable solution for minimizing end-to-end delay. In addition, the self-driving car should itself determine the infotainment contents to play, where Artificial Intelligence (AI) should help the self-driving car analyze and understand the vehicle occupants' features. To achieve this, we need to address the following question: how can we cache infotainment contents in self-driving cars and at roadside units in close proximity to the self-driving cars, based on passengers' features and using both Multi-access Edge Computing (MEC) and the joint 4C framework?

In this chapter, to answer the above question, we propose a new approach for content caching in self-driving cars and in MEC servers attached to roadside units, where infotainment contents are cached based on passengers' features. To obtain passengers' features, we use deep learning techniques implemented in self-driving cars and MEC. We formulate an optimization problem that aims to


minimize the total delay for retrieving infotainment contents, subject to communication, caching, and computation (3C) resource constraints. However, the formulated problem for deep learning based caching is intractable and non-convex. Therefore, to make it easy to solve, we introduce an upper-bound convex problem of the formulated problem. Then, to solve the upper-bound convex problem, we propose a distributed optimization control algorithm based on the Block Successive Majorization-Minimization (BS-MM) approach. The performance evaluation demonstrates that our proposal can reduce the total delay for retrieving infotainment contents. In addition, our prediction of the contents to be cached at the edge in close proximity to the self-driving cars achieves 98.14% accuracy.

4.2 Background and Contributions

Autonomous cars look like the new cars produced in recent years, where some features for autonomous driving, such as adaptive cruise control and self-parking, are already deployed. It is expected that the next stage of autonomous driving will be "self-driving". In the self-driving mode, there will be no human driver intervention [84], i.e., self-driving cars will drive themselves. Therefore, to achieve fully autonomous driving, self-driving cars need smart sensors and analytics tools for collecting, processing, and analyzing data in real time related to the driving environment, vehicle occupants, pedestrians, etc. In such a situation, big data and AI will play irreplaceable roles [5] [85]. In addition, AI should be an empathetic companion of the vehicle's occupants that assists them and provides personalized services, such as infotainment services. Therefore, in self-driving cars, AI should be able to analyze and understand the vehicle occupants' features [86].

In this dissertation, rather than using human-driven cars, we choose self-driving cars because self-driving cars are already equipped with On-Board Units (OBUs) and Graphics Processing Units (GPUs) that make it easy to deploy AI-based solutions. AI enables self-driving cars to observe, process, learn, and navigate in real time [84]. Furthermore, it is estimated that there will be 22 billion hours of extra media consumption once everyone in the US uses self-driving cars [87]. Therefore, using OBUs, GPUs, and AI that enable Computation, Communication, Caching, and Control (4C) in self-driving cars, vehicle occupants can enjoy their travels using the infotainment services of the car, such as playing games, watching media, and utilizing social networks. To make travel more enjoyable, self-driving cars need to have

recent emerging technologies for infotainment services, such as Virtual, Augmented, and Mixed Reality and AI-based services [88]. However, getting infotainment contents from Data Centers (DCs) can disrupt infotainment services for self-driving cars because of limited backhaul capacities and the high communication delay between self-driving cars and DCs. As an illustrative example, in a self-driving car, watching a video requires a video source, a screen to display the video, and a sound system for the audio. When the video to play is not available in the self-driving car, the car must connect to the Internet to retrieve the video from the DC. When the DC is far away, video/content delivery services will experience high communication delay. To minimize communication delay, content caching in self-driving cars should be considered a promising solution for enhancing infotainment services. Furthermore, placing contents in close proximity to self-driving cars, rather than always using the DC, can also help in reducing communication delay. Therefore, in this chapter, we consider Multi-access Edge Computing (MEC) [22] an appropriate new technology that can help self-driving cars by caching contents in close proximity to them. Typically, in MEC, MEC servers are implemented near the users, i.e., at the edge of the wireless network, for providing nearby cloud and IT application services [22] [64]. In this chapter, MEC servers are implemented in close proximity to self-driving cars at RoadSide Units (RSUs).

4.2.1 Challenges for Caching in Self-Driving Cars

• The self-driving cars will have a new outlook, where the steering wheel and driver's seat will disappear. In other words, the self-driving car will have new interior space that can be utilized for lounge services such as infotainment. Therefore, in self-driving cars, passengers can enjoy their travels using infotainment services, where infotainment content providers can transact with the vehicle's occupants by providing them high-quality infotainment contents [88]. To minimize the end-to-end delay in getting infotainment contents, caching infotainment contents in self-driving cars is needed. However, there is still a lack of literature that investigates how infotainment contents can be cached in self-driving cars based on the features of the vehicle's occupants or passengers.

• To entertain passengers, self-driving cars need to provide heterogeneous infotainment contents with recent emerging technologies such as AI-based infotainment and Virtual, Augmented, and Mixed Reality [88]. However, getting these infotainment contents from the DC can incur high end-to-end delay. Therefore, MEC servers need to support self-driving cars by caching infotainment contents near the self-driving cars.

• In human-driven cars, the infotainment contents to play are chosen by drivers. However, in the self-driving car, the human driver is replaced by AI. Therefore, AI should determine the infotainment contents to play in the self-driving car, where the contents should entertain the vehicle's occupants. In addition, the chosen contents should not violate regulations related to prohibited and restricted content access. Therefore, with AI, the self-driving car needs to learn its occupants' features, such as age, location, and gender, so that infotainment contents can be styled to the vehicle occupants' features.

• Self-driving car services are delay-sensitive. Consequently, to minimize backhaul bandwidth consumption and communication delay between self-driving cars and DCs, a joint communication, computation, caching, and control framework should be considered in the MEC servers and self-driving cars for effective resource utilization. In addition, to have fast handoff and less variation in transmission delay for retrieving infotainment contents, the self-driving car should select the MEC servers available along its route that will be used to download the infotainment contents that need to be cached.

• As highlighted in [90], vehicular network topologies are dynamic due to high vehicle mobility. Therefore, using IEEE 802.11p, vehicular networks experience frequent network handovers and short-lived links. However, infotainment services always need Internet access for retrieving infotainment contents, and IEEE 802.11p was not designed for this purpose. Therefore, maintaining continuous Internet access using IEEE 802.11p is a challenging issue. To address this challenge, for infotainment services, other wireless technologies such as LTE and 5G should be considered as suitable solutions.


Figure 4.1: The impact of users' features on choosing contents [89].

4.2.2 Contributions

To address the above-highlighted challenges, we propose deep learning based caching of infotainment contents in self-driving cars and in nearby MEC servers, using both deep learning and the joint 4C framework. Our key contributions are summarized as follows:

• As illustrated in Fig. 4.1, using the YouTube demographics dataset [89], users' choices of contents depend on their features; in Fig. 4.1, age and gender are used for choosing videos. In this chapter, we introduce a Convolutional Neural Network (CNN) approach for predicting users' features. In our approach, the CNN model is trained and tested at the cloud/DC using datasets. Then, we store the trained and tested CNN model near the self-driving cars at RSUs. Each self-driving car retrieves the CNN model from the RSU/MEC server and utilizes it to predict vehicle occupants' features using facial images as input. Finally, the self-driving car uses the CNN output to decide on the contents to cache that are appropriate to the occupants' features. Here, we choose CNN over collaborative filtering (which consists in establishing the relationship between users' preferences and items' features) because each passenger's preferences and features for infotainment contents are not known a priori by the self-driving car.

• To retrieve infotainment contents styled to vehicle occupants' features, MEC and the DC can support self-driving cars. At the DC, we introduce a Multi-Layer Perceptron (MLP) approach for predicting both content ratings and the probability of contents being needed at the edge of the network. We store our MLP prediction output near the self-driving cars at RSUs. Then, each RSU can utilize the MLP output to identify the contents to cache near the cars based on location, predicted content ratings, and the probabilities of being needed at the edge. Here, we select MLP over other approaches such as AutoRegressive Moving Average (ARMA) and AutoRegressive (AR) models because MLP can handle both linear and non-linear problems during the prediction process [91].

• For the self-driving car to identify the infotainment contents to cache, it retrieves the MLP output from the nearby MEC server attached to the RSU and compares the MLP and CNN outputs using classification. We propose to use k-means and binary classification for comparing and joining the MLP and CNN outputs. Here, we select k-means and binary classification over other classification approaches because they are computationally inexpensive and easy to implement [92, 93].

• We formulate a caching approach for infotainment contents in self-driving cars that uses deep learning and exploits the 4C capacities of MEC servers and self-driving cars as an optimization problem that minimizes the communication delay for downloading infotainment contents. To solve the formulated optimization problem, our approach uses the Block Successive Majorization-Minimization (BS-MM) method introduced in [94]. We select the BS-MM method over other solution approaches because BS-MM permits us to split the formulated problem into subproblems, where each subproblem can be handled separately. In addition, for solving the subproblems, BS-MM allows parallel computation.
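The selection step that joins the MLP and CNN outputs can be sketched with a small from-scratch k-means; the feature columns, cluster count, and decision rule here are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

# Illustrative sketch: cluster contents by their MLP-predicted rating and
# request probability with a tiny k-means, then keep the cluster whose
# centroid scores highest -- a binary cache / don't-cache decision.

def kmeans(X, k=2, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):           # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(3)
# columns: MLP-predicted rating (0-5), predicted request probability (0-1);
# in practice the columns would be normalized to a common scale first
contents = np.column_stack([rng.uniform(0, 5, 40), rng.uniform(0, 1, 40)])
labels, centers = kmeans(contents, k=2)
best = int(np.argmax(centers.sum(axis=1)))    # cluster with high rating + prob
cache_set = np.where(labels == best)[0]       # indices of contents to cache
print(len(cache_set), "of", len(contents), "contents selected for caching")
```

The binary split (cache vs. skip) is what makes k = 2 natural here, and a from-scratch loop keeps the computational cost as low as the text claims for k-means.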

4.3 System Model

Fig. 4.2 shows our system model of caching in the self-driving car using deep learning, which is

composed of the following components:

Data Center (DC): Based on the demands that reach the DC, we consider that the DC has datasets for prediction purposes. In addition, the DC can obtain datasets from data markets. At the DC, we use


[Figure content: the DC trains the CNN (input: facial images; output: user features) and the MLP (input: content name, type of content, consumer age, location, gender, rating; output: ratings and probabilities of contents being requested in a specific area). MEC servers (4C) at RSUs download the CNN model and MLP output from the DC over the wired backhaul, identify the contents with high ratings and request probabilities in their area, and cache them. Self-driving cars, equipped with OBUs (4C), download the CNN model and MLP output from the RSU over the wireless channel, use the CNN to predict passengers' features, identify the contents that meet those features, and cache them.]

Figure 4.2: The system model of caching in the self-driving car using deep learning.

facial images from the dataset and the CNN model presented in Section 4.3.1.2 to predict users' features. In addition to the CNN, we use the MLP presented in Section 4.3.1.1 to predict infotainment content ratings and the probabilities of contents being needed at the edge of the network, near the self-driving cars. To minimize car-DC communication delay, we store the trained and tested CNN model and the MLP output at the edge of the network, i.e., at MEC servers attached to RSUs. We denote N = {1, 2, . . . , N} as the set of edge areas.

RoadSide Unit (RSU): We consider that the edge areas are equipped with RSUs, and we denote R as the set of RSUs. In addition, each RSU r ∈ R is connected to the DC through a wired backhaul link of capacity ωr,DC. Each RSU r ∈ R is equipped with one MEC server for handling 4C. Hereafter, we use the terms MEC server and RSU interchangeably. Moreover, we consider that each MEC server associated with RSU r ∈ R is equipped with a computation resource of capacity pr and cache storage of capacity cr.

Each RSU r ∈ R can retrieve the CNN model and MLP output from the DC using the backhaul link. Then, by using the MLP output, each MEC server attached to RSU r ∈ R can identify, download, and cache the infotainment contents that have both high predicted ratings and high probabilities of being needed/requested in its area. We denote I as the set of infotainment contents, where S(i) is the size of infotainment content i ∈ I in megabytes. Furthermore, based on the requests for infotainment contents that reach the MEC server, each cached infotainment content can be served to users as is or after being processed to improve its quality or to meet the requested format. Here, we denote i as an infotainment content before processing and i′ as the infotainment content after being processed. As an illustrative example, consider an infotainment content in .mpeg format that is not cached at the MEC server, while the MEC server has the same content cached as content i in .avi format. To satisfy the request for content i′, the MEC server can compute, i.e., transcode, content i into content i′ using its computation resources.

Self-driving car: In our system model, we denote V as the set of V self-driving cars, where each self-driving car v ∈ V needs to be connected to an RSU r ∈ R to get infotainment contents via a wireless link of capacity ωv,r. Furthermore, we consider that self-driving cars are equipped with OBUs for 4C, where infotainment contents that meet passengers' features can be cached to minimize the end-to-end delay. In addition, each self-driving car v ∈ V is equipped with computation and caching resources of capacities pv and cv, respectively. Furthermore, to identify the infotainment contents to retrieve via RSUs and cache in the self-driving cars, we use both the CNN and MLP outputs, where each self-driving car v ∈ V retrieves the CNN model and MLP output from the nearby MEC server attached to the RSU. Then, the self-driving car utilizes the trained and tested CNN model to predict its occupants' features from their facial images. Here, we consider the age and gender features from facial images. Therefore, we assume that each self-driving car v ∈ V has a camera that helps in taking facial images of incoming passengers. After getting the passengers' features, the self-driving car v ∈ V applies the k-means and binary classification algorithms presented in Section 4.3.1.3 to the CNN and MLP outputs to identify the infotainment contents that are appropriate to its occupants' features. Then, the self-driving car retrieves the identified contents and caches them.

The rest of this section is organized as follows. In Section 4.3.1, we present the deep learning and recommendation model. Since self-driving cars need communication resources to retrieve the recommended contents, we present our communication model in Section 4.3.2. In addition, we describe our caching model for downloaded contents in detail in Section 4.3.3. Since cached contents can be processed to improve their quality or transcoded into different formats, we present the computation model for cached contents in Section 4.3.4. Finally, to join the deep learning, recommendation, communication, caching, and computation models, we propose a control model in Section 4.3.5.

4.3.1 Deep Learning and Recommendation Model

In this subsection, we discuss the Multi-Layer Perceptron (MLP) used to predict the probabilities of contents being needed at the edge of the network; the MLP output indicates the contents that need to be cached near the self-driving cars. In addition, we present in detail the Convolutional Neural Network (CNN) used for predicting vehicle occupants' features (age, gender, and location). Then, we join and compare the CNN and MLP outputs; the output of this comparison is the recommendation of the contents that need to be cached in the self-driving car, near the passengers.

4.3.1.1 Multi-Layer Perceptron (MLP) Model

To identify the contents to cache at RSUs near the self-driving cars, we use an MLP at the DC. The MLP helps in predicting both the content ratings and the probabilities of contents being needed at the edge of the network, in the areas of the RSUs. The input and output of our MLP are summarized as follows:

• Input: At the DC, we predict content ratings using the labeled dataset presented in Section 2.5. The dataset contains ratings, content names, and viewers' gender, location, and age. We use this information as the input of the Long Short-Term Memory (LSTM) network described in [96] [97] to predict infotainment content ratings. Then, we feed these inputs, together with the predicted ratings, into the MLP to predict the probabilities of contents being needed at the edge of the network (at the RSUs). We denote x = (x1, x2, . . . , xM)T as the input vector, whose subscripts index the features. Here, we use content names, predicted ratings, and viewers' gender, age, and location as features.


• Output: The MLP uses the input to predict the output. We use y = (y1, y2, . . . , yN)T to denote the output vector, whose subscripts represent the geographical areas of the RSUs. Furthermore, in the last layer, each neuron is associated with one area n ∈ N, for which we predict the probabilities of contents being needed at edge area n ∈ N.

Since we consider the MLP to be an artificial neural network with multiple layers, we introduce the MLP starting from a simple artificial neural network (ANN) with one layer. In a simple ANN, the output is the weighted sum of the input, where wnm is the weight for processing input xm into output yn. The output yn can be mathematically defined as follows:

y_n = f\left( \sum_{m=1}^{M} w_{nm} x_m + b_n \right), \quad (4.1)

where f(·) denotes the activation function and b_n denotes the bias added to the linear combiner \sum_{m=1}^{M} w_{nm} x_m.

The MLP is an extension of the simple ANN to multiple layers, i.e., the MLP contains more hidden layers, where each hidden layer contains multiple units (neurons). We use l to represent the number of hidden layers of the MLP, vector x to denote the input, vectors b^{(1)}, . . . , b^{(l)} to denote the biases, matrices W^{(1)}, . . . , W^{(l)} to represent the weights associated with each layer, and vector y to denote the output. The output y is mathematically given by:

y = f\left( W^{(l)} \cdots f\left( W^{(2)} f\left( W^{(1)} x + b^{(1)} \right) + b^{(2)} \right) \cdots + b^{(l)} \right). \quad (4.2)

Here, in all layers except the output layer, we use the following Rectified Linear Unit (ReLU) activation function:

y_m = \max(0, x_m), \quad \forall m. \quad (4.3)

ReLU performs better than other activation functions in mitigating the vanishing gradient problem that occurs during MLP training [98]. In the last layer l, we apply the softmax function as the activation function to squeeze the output y into probabilistic values.


The softmax function can be mathematically expressed as follows:

\mathrm{softmax}(y)_l = \frac{e^{y_l}}{\sum_{n=1}^{N} e^{y_n}}, \quad \text{for } l = 1, \dots, N. \quad (4.4)

In our MLP, the last layer has N neurons, where each neuron n ∈ N corresponds to one area. In other words, we have N geographical locations in which to deploy cache-enabled RSUs/MEC servers for caching the contents that have high ratings and probability values.

For a given input x, the MLP tries to compute the output y. Therefore, for effective training of our MLP model, the weights w need to be well adjusted so that the correct output y is predicted from the input x; in other words, the weights w need to be adjusted so that the error function is minimized. In our MLP, we use cross-entropy as the error function because the MLP model needs to group the contents needed at the edge into N geographical areas. Our problem is therefore a classification problem, and cross-entropy is a suitable error function for this kind of problem. The cross-entropy error function A(y, \hat{y}) is mathematically given by:

A(y, \hat{y}) = -\sum_{n=1}^{N} y_n \log \hat{y}_n, \quad (4.5)

where A(y, \hat{y}) computes the cross-entropy between the ground truth y and the estimated class probabilities \hat{y}. Therefore, we interpret the output of our MLP model as the probabilities of the contents being needed at the edge in the geographical areas of the RSUs. Finally, based on the geographical areas of the RSUs, we store the output of the MLP at the MEC servers attached to the RSUs, so that the self-driving cars can retrieve the MLP output with minimized delay rather than getting it from the DC.
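A minimal NumPy sketch of the forward pass in (4.2)-(4.5), with ReLU hidden activations, a softmax over the N RSU areas, and the cross-entropy error; the layer sizes and random weights below are placeholders, not trained values.

```python
import numpy as np

# Toy forward pass of the MLP described above; not the trained model.

def relu(z):
    return np.maximum(0, z)                       # eq. (4.3)

def softmax(y):
    e = np.exp(y - y.max())                       # shift for numerical stability
    return e / e.sum()                            # eq. (4.4)

def mlp_forward(x, weights, biases):
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):   # hidden layers, eq. (4.2)
        a = relu(W @ a + b)
    return softmax(weights[-1] @ a + biases[-1])  # probability per edge area

def cross_entropy(y_true, y_hat):
    return -np.sum(y_true * np.log(y_hat + 1e-12))  # eq. (4.5)

rng = np.random.default_rng(0)
M, H, N = 5, 16, 4                                # features, hidden units, areas
weights = [rng.standard_normal((H, M)), rng.standard_normal((N, H))]
biases = [np.zeros(H), np.zeros(N)]
x = rng.standard_normal(M)                        # content/viewer feature vector
y_hat = mlp_forward(x, weights, biases)
print(y_hat, cross_entropy(np.eye(N)[2], y_hat))
```

The softmax output sums to one, so each entry can be read directly as the probability that the content is needed in the corresponding RSU area, as described above.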

4.3.1.2 Convolutional Neural Network (CNN) Model

To obtain both the age and gender of vehicle occupants from facial images, we use a CNN [99] [100] and the IMDB-WIKI dataset described in [101]. We train and test the CNN model at the DC. The workflow of the CNN model is summarized as follows:

• Input: In our CNN model, we use k0 to denote the input image, where each image has three dimensions: width, height, and the number of color channels. Here, we use the RGB (red, green, and blue) color channels.

• Convolution layer: In this layer, at each neuron, we apply filters on input regions to compute the output. In other words, each neuron is associated with a local region of the input. The convolution layer then produces a feature map kj by applying dot products between local regions of the input and the weights, where kj denotes the feature map produced by convolution layer j.

• ReLU layer: In this layer, we use ReLU (max(0, kj)) as the elementwise activation function, which mitigates the vanishing-gradient problem during convolution.

• Max pooling layer: Since we have a high-dimensional matrix after applying the convolution and ReLU layers, the max pooling layer downsamples this high-dimensional matrix.

• Fully-connected layer: We use the fully-connected layers to calculate the class scores associated with the facial image of the passenger, where the neurons of this layer are fully connected to all neurons of the previous layer. For the class scores of facial images, we have two classes (male and female) for gender and 101 classes (0 to 100) for age. Here, for facial image classification, we have two fully-connected layers (one for age and one for gender).

• Softmax layer: Finally, we use the softmax activation function on the output of the fully-connected layers to squeeze the output into the probability values of the classes (gender and age) associated with a facial image of a passenger.
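The layer stack above (convolution, ReLU, max pooling, fully-connected, softmax) can be sketched with plain NumPy on a toy single-channel image. The 8x8 input, the single 3x3 filter, and the random weights are hypothetical stand-ins for the real RGB facial images and trained CNN parameters.

```python
import numpy as np

def conv2d(img, kernel):
    # Convolution layer: dot products between local input regions and weights.
    H, W = img.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kH, j:j + kW] * kernel)
    return out

def relu(x):
    # ReLU layer: elementwise max(0, k_j).
    return np.maximum(0, x)

def max_pool(x, s=2):
    # Max pooling layer: downsample the feature map by taking block maxima.
    H, W = x.shape
    H, W = H - H % s, W - W % s
    return x[:H, :W].reshape(H // s, s, W // s, s).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
face = rng.random((8, 8))                      # toy grayscale "facial image"
feat = max_pool(relu(conv2d(face, rng.normal(size=(3, 3)))))
W_gender = rng.normal(size=(2, feat.size))     # fully-connected layer: 2 gender classes
gender_probs = softmax(W_gender @ feat.ravel())
```

The age head would work identically with a 101-way output instead of the 2-way gender output.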

We store the trained and tested CNN model at the MEC servers attached to RSUs near the self-driving cars. This helps a self-driving car retrieve the CNN model in close proximity at an RSU with minimized delay rather than retrieving it from the DC. The self-driving car then utilizes the downloaded CNN model to predict the age and gender of its occupants from facial images. Here, we consider that the facial images of the vehicle's occupants are automatically taken by the vehicle's camera. From the facial images, the self-driving car extracts its occupants' features,


Figure 4.3: Recommendation model for self-driving car. (The figure depicts the workflow: 1. download the CNN model and MLP outputs from the RSU and store them; 2. cluster the MLP outputs based on user information; 3. predict passenger features such as age and gender using the CNN; 4. assign passengers to the formulated clusters; 5. select the top-ranked contents with high predicted probabilities in each cluster; 6. download and cache the selected contents.)

which include the positions of the eyes, mouth, nose, and chin. The self-driving car then utilizes this information to classify facial images into different age and gender classes.

4.3.1.3 Recommendation Model

Fig. 4.3 shows our recommendation model, in which the self-driving car retrieves the CNN model and the MLP output from an RSU. The self-driving car uses both the CNN model and the MLP output to identify the contents it needs to download and cache in its cache storage. Our recommendation model for caching content in the self-driving car is summarized below:

• Step 1: We consider that each self-driving car v ∈ V is connected to the nearest RSU for retrieving both the MLP output and the CNN model.

• Step 2: The self-driving car v ∈ V uses the MLP output and the k-means algorithm to create age-based clusters, while binary classification is used to create gender-based clusters. Then, in each cluster, the self-driving car identifies the contents that have high predicted ratings and probabilities of being requested in its area as an initial recommendation for the contents to


download and cache in its cache storage.

• Step 3: Each self-driving car v ∈ V uses its camera to capture each incoming passenger's facial image. Then, the self-driving car uses the downloaded CNN model and the facial image to predict its occupant's age and gender.

• Step 4: After using the CNN to predict the passenger's features (age and gender), the self-driving car computes the similarity of the incoming vehicle occupant u ∈ U with the existing users in both the age- and gender-based clusters. Then, the self-driving car assigns each incoming passenger u ∈ U to age- and gender-based clusters.

• Step 5: After classifying the vehicle's occupants into both age- and gender-based clusters, the self-driving car v ∈ V goes inside each age- and gender-based cluster and identifies the contents that have high predicted ratings and probabilities of being requested in its area as a recommendation for the contents to download and cache.

• Step 6: Finally, the self-driving car retrieves the recommended infotainment contents via the MEC servers attached to RSUs and caches them in its cache storage.

In our recommendation model for the self-driving car, we use the k-means algorithm to create the age-based clusters from the MLP output. In our model, we use y_n to denote the MLP output for the edge area n ∈ N of an RSU and X = y_n to denote the age input of the k-means approach. In our proposal, the k-means algorithm classifies the age data points X = {x_1, . . . , x_U} into K clusters X_1, . . . , X_K so that X_1 ∪ X_2 ∪ · · · ∪ X_K = X. Here, K corresponds to the number of age categories in the dataset [112], and the clusters are disjoint, X_i ∩ X_j = ∅, i ≠ j. To assign the data points to the corresponding centroids, k-means minimizes the following objective function:

min_{{X_j}_{j=1}^{K}} ∑_{j=1}^{K} ∑_{x_u ∈ X_j} ‖x_u − x̄_j‖².    (4.6)

We use x̄_j to denote the centroid of cluster X_j, where x̄_j is given by:

x̄_j = (∑_{x_u ∈ X_j} x_u) / |X_j|.    (4.7)
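A compact NumPy sketch of Lloyd's k-means iteration for 1-D age values, alternating the assignment step of Eq. (4.6) with the centroid update of Eq. (4.7). The age values and K = 3 are illustrative toy choices, not the full set of age categories used in practice.

```python
import numpy as np

def kmeans(X, K, iters=50, seed=0):
    # Lloyd's algorithm minimizing Eq. (4.6); centroids follow Eq. (4.7).
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):
        # Assignment: each age data point x_u joins its nearest centroid.
        labels = np.argmin(np.abs(X[:, None] - centroids[None, :]), axis=1)
        # Update: each centroid becomes the mean of its cluster (Eq. 4.7).
        for j in range(K):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean()
    return labels, centroids

# Hypothetical predicted ages from the MLP output.
ages = np.array([5., 7., 8., 34., 36., 70., 72., 75.])
labels, centroids = kmeans(ages, K=3)
```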


We consider that the passengers in age-based clusters may have different genders. Therefore, inside each age-based cluster, we cluster the passengers based on gender. For gender-based clustering, we use binary classification [93], where in each age-based cluster we have two sub-clusters: one sub-cluster denoted G_j^female for females and another denoted G_j^male for males, such that X_j = G_j^female ∪ G_j^male and G_j^female ∩ G_j^male = ∅. Furthermore, in each gender-based sub-cluster, the self-driving car selects the infotainment contents that have high predicted ratings and probabilities of being requested in its area as a recommendation. In other words, the car selects contents that are appropriate to the location, ages, and genders of the vehicle's occupants. In our caching approach, we consider that the self-driving car retrieves the recommended infotainment contents and caches them in descending order of predicted ratings and probabilities until the cache storage c_v cannot accommodate any additional content(s) or there are no more contents to cache.

In this dissertation, we consider that the CNN model and the MLP output are downloaded during off-peak hours, where the MEC servers attached to RSUs download and store them near the self-driving cars. Therefore, during off-peak hours, the self-driving cars can retrieve the CNN model and the MLP output at the RSUs. However, self-driving cars can download top-recommended contents using communication resources at any time, without waiting for off-peak hours. Therefore, we propose a communication model for retrieving recommended contents in Section 4.3.2 below.

4.3.2 Communication Model

At MEC server r, downloading the infotainment contents that have high predicted ratings and probabilities requires a wired backhaul link. Here, we use a wired backhaul link of capacity ω_{r,DC}, where the transmission delay for retrieving infotainment contents from the cloud/DC can be expressed as follows:

τ_r^{DC} = ∑_{i ∈ I_r(n)} q_i^{DC→r} S(i) / ω_{r,DC}.    (4.8)

Furthermore, for n ∈ N, we use I_r(n) to denote the set of predicted infotainment contents needed at edge area n. Therefore, to minimize communication delay, these infotainment contents need to be requested and cached in area n of RSU r. In addition, we use q_i^{DC→r} to denote a decision variable that specifies whether or not RSU/MEC server r should download the top recommended infotainment content i ∈ I_r(n) from the DC. The decision variable q_i^{DC→r} is defined


Figure 4.4: RSU selection process for self-driving car. (The figure shows the distances between the self-driving car and each of the RSUs 1 through R.)

as follows:

q_i^{DC→r} =
    1, if content i is downloaded from the DC,
    0, otherwise.    (4.9)
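Eqs. (4.8) and (4.9) amount to summing the sizes of the selected contents and dividing by the backhaul capacity. A small sketch with hypothetical content sizes and link capacity:

```python
def backhaul_delay(sizes_bits, q_dc, omega_r_dc):
    """Eq. (4.8): transmission delay for fetching the predicted contents
    I_r(n) from the DC over a backhaul link of capacity omega_r_dc (bit/s).
    q_dc[i] is the 0/1 download decision variable of Eq. (4.9)."""
    return sum(q * s for q, s in zip(q_dc, sizes_bits)) / omega_r_dc

# Hypothetical values: three contents, two of them selected for download,
# over a 10 Mbit/s backhaul link.
delay = backhaul_delay([8e6, 4e6, 2e6], [1, 0, 1], omega_r_dc=10e6)
```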

For the connection between self-driving cars and RSUs over the wireless channel, we assume that each self-driving car v ∈ V travels in an area covered by BSs. Furthermore, to get information on the available RSUs along the routes of self-driving cars, the Access Network Discovery and Selection Function (ANDSF) deployed in the core of the cellular network is used [102], [104]. As illustrated in Fig. 4.4, to get RSU information such as coordinates and coverage, the self-driving car submits an ANDSF request to the ANDSF server through a BS of the cellular network. The ANDSF request includes the geographic location, direction, and speed of the self-driving car. On receiving the ANDSF request, the ANDSF server replies with the coordinates and coverage of the RSUs deployed in the direction of the self-driving car. This helps the self-driving car select the appropriate RSUs for retrieving infotainment contents and perform fast hand-offs with less variation in transmission time.


Based on the ANDSF server's feedback, each self-driving car v can calculate the distance d_{rv} between its route and each RSU r, where d_{rv} is given by:

d_{rv} = g_{rv} sin α_{rv},    (4.10)

where α_{rv} denotes the angle between the straight line from RSU r ∈ R to self-driving car v and the trajectory of self-driving car v. In addition, we use g_{rv} to denote the geographical distance between self-driving car v and RSU r. Furthermore, as described in [103], we consider that g_{rv} and α_{rv} can be calculated using the Global Positioning System (GPS). Based on g_{rv} and α_{rv}, each self-driving car v can calculate the required distance d_{vr} to reach the area of RSU r ∈ R as follows:

d_{vr} = g_{rv} cos α_{rv}.    (4.11)

As described in [103], we use the following probability ρ_{rv} to select, among the available RSUs along the route of self-driving car v, the RSU r that will be used to download the infotainment contents to be cached:

ρ_{rv} =
    1, if d_{rv} = 0,
    d_{rv}/γ_r, if 0 < d_{rv} < γ_r,
    0, otherwise,    (4.12)

where γ_r denotes the radius of the area covered by RSU r ∈ R. In addition, for self-driving car v to download recommended contents from an RSU, an active connection is required. Therefore, we use q_v^r to denote a decision variable that specifies whether or not the self-driving car v is connected to RSU r ∈ R, where q_v^r is expressed as follows:

q_v^r =
    1, if ρ_{rv} > 0 and d_{vr} = 0,
    0, otherwise.    (4.13)
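Eqs. (4.10) through (4.13) can be combined into a small helper that maps the GPS-derived geometry (g_rv, α_rv) to the selection probability and connection decision. The numeric inputs below are hypothetical, and the ρ branch follows the reconstruction of Eq. (4.12) given above.

```python
import math

def rsu_selection(g_rv, alpha_rv, gamma_r):
    """Sketch of Eqs. (4.10)-(4.13). Returns (d_rv, d_vr, rho, q):
    perpendicular distance between the RSU and the car's trajectory,
    remaining distance to reach the RSU area, the selection probability,
    and the connection decision variable."""
    d_rv = g_rv * math.sin(alpha_rv)            # Eq. (4.10)
    d_vr = g_rv * math.cos(alpha_rv)            # Eq. (4.11)
    if d_rv == 0:                               # Eq. (4.12)
        rho = 1.0
    elif d_rv < gamma_r:
        rho = d_rv / gamma_r
    else:
        rho = 0.0
    q = 1 if (rho > 0 and d_vr == 0) else 0     # Eq. (4.13)
    return d_rv, d_vr, rho, q

# Car exactly at the RSU (g_rv = 0): it connects immediately.
d_rv, d_vr, rho, q = rsu_selection(0.0, 0.3, gamma_r=200.0)
```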


Both (4.12) and (4.13) guarantee that when the self-driving car v arrives in area n of RSU r ∈ R, it immediately connects and downloads the recommended contents that have a high probability value (4.12) from the RSU over a wireless channel of capacity ω_{v,r}. Furthermore, at each time slot, we consider that each self-driving car v ∈ V uses one channel, where channel utilization is based on time-division multiplexing [105]. Therefore, ω_{v,r} is given by:

ω_{v,r} = q_v^r B_r log_2(1 + ϕ_r |G_{rv}|²), ∀v ∈ V, r ∈ R.    (4.14)

Here, we use B_r to denote the bandwidth for RSU-car communications, G_{rv} to denote the channel gain between self-driving car v and RSU r, and ϕ_r to denote the scalar factor related to the transmission power of RSU r. Furthermore, we can define the transmission delay for retrieving contents from RSU r as follows:

τ_v^r = ∑_{i_f, i_m ∈ I_r(n)} q_v^r (S(i_f) + S(i_m)) / ω_{v,r}.    (4.15)

We use i_f ∈ G_j^female to denote the most requested content by female users, and i_m ∈ G_j^male to denote the most requested content by male users in each cluster j, where i_f, i_m ∈ I_r(n).

For downloading contents at self-driving car v ∈ V, we can define the time t_v^r needed to leave the area covered by RSU r, where t_v^r is given by:

t_v^r = 2 q_v^r γ_r / μ_v.    (4.16)

In t_v^r, we use μ_v to denote the speed of self-driving car v in the area of RSU r. Furthermore, if τ_v^r < t_v^r, the self-driving car does not immediately need to hand off to another RSU. However, if τ_v^r ≥ t_v^r, the self-driving car can select the next RSU to use for downloading contents based on the probability value (4.12).
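The hand-off test above compares the download delay of Eq. (4.15) with the coverage-dwell time of Eq. (4.16). A minimal sketch with hypothetical link and mobility parameters:

```python
def needs_handoff(content_bits, omega_vr, gamma_r, mu_v, q_rv=1):
    """Compare the download delay tau (Eq. 4.15) with the time t_leave the
    car needs to cross the RSU's coverage (Eq. 4.16). Returns True when
    tau >= t_leave, i.e. the next RSU must take over the download."""
    tau = q_rv * sum(content_bits) / omega_vr        # Eq. (4.15)
    t_leave = 2 * q_rv * gamma_r / mu_v              # Eq. (4.16)
    return tau >= t_leave

# Hypothetical numbers: 40 Mbit to fetch at 10 Mbit/s gives 4 s of download;
# 400 m of coverage crossed at 20 m/s gives 20 s: no hand-off needed yet.
handoff = needs_handoff([30e6, 10e6], omega_vr=10e6, gamma_r=200.0, mu_v=20.0)
```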

In a self-driving car, the goal of our caching approach for infotainment contents is to satisfy the demands of the vehicle's occupants for infotainment contents. Therefore, we consider that each self-driving car v has a WiFi router that provides Internet access via WiFi to its passengers. In addition, the WiFi router's channel resources are shared among the vehicle's occupants in a contention-based


fashion [106]. Furthermore, using the WiFi router in self-driving car v, the instantaneous data rate of each passenger u can be mathematically expressed as follows:

ψ_u^v = q_u^v ϕ_v ψ̄_u^v ξ_u^v(|U_v|) / |U_v|, ∀u ∈ U_v, v ∈ V.    (4.17)

In (4.17), we use ϕ_v to denote the throughput efficiency factor of the WiFi router, which accounts for the WiFi MAC protocol layering overhead, and |U_v| to denote the number of vehicle occupants simultaneously served by the WiFi router of self-driving car v, such that U_v ⊂ U. Moreover, we use ψ̄_u^v to denote the maximum theoretical data rate that the WiFi router can provide; we assume that ψ̄_u^v is known a priori and is protocol dependent. Furthermore, we use ξ_u^v(|U_v|) to denote the WiFi channel utilization function, where ξ_u^v(|U_v|) depends on the number of vehicle occupants connected to the WiFi router simultaneously [106]. In addition, we define the following decision variable q_u^v, which specifies whether or not vehicle occupant u is served by (connected to) the WiFi router available in the self-driving car v:

q_u^v =
    1, if passenger u is connected to the WiFi router of the self-driving car v,
    0, otherwise.    (4.18)

Therefore, based on the instantaneous data rate ψ_u^v, each vehicle occupant u experiences the following transmission delay τ_u^v for retrieving infotainment contents from or via self-driving car v:

τ_u^v = ∑_{i_f, i_m ∈ I_r(n)} q_u^v (S(i_f) + S(i_m)) / ψ_u^v.    (4.19)
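Eq. (4.17) scales the router's theoretical rate by the efficiency and utilization factors and splits it evenly across the connected occupants; Eq. (4.19) then divides the content size by that per-passenger rate. A sketch with hypothetical parameter values:

```python
def wifi_rate(q_vu, phi_v, psi_max, xi, n_users):
    """Eq. (4.17): instantaneous per-passenger data rate when n_users
    occupants share the car's WiFi router in contention. phi_v is the
    MAC-overhead efficiency factor, psi_max the router's maximum
    theoretical rate, and xi the channel-utilization factor."""
    return q_vu * phi_v * psi_max * xi / n_users

# Hypothetical: 54 Mbit/s router, 70% efficiency, xi = 0.9, 3 passengers.
rate = wifi_rate(1, 0.7, 54e6, 0.9, 3)
# Eq. (4.19) for one (i_f, i_m) pair of 8 Mbit and 4 Mbit contents.
delay = (8e6 + 4e6) / rate
```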

4.3.3 Caching Model

To reduce the delay experienced by the vehicle's occupants in retrieving infotainment contents, we propose infotainment content caching in self-driving cars and RSUs. In our proposal, we consider that each self-driving car v is equipped with limited cache storage c_v. Consequently, the sum of S(i_f) and S(i_m) for the recommended infotainment contents that meet the passengers' features, downloaded from or via the nearest RSU r and cached in self-driving car v, has to satisfy the following


cache capacity constraint:

q_v^r ∑_{j=1}^{K} ( ∑_{i_f ∈ G_j^female} o_v^{i_f} S(i_f) + ∑_{i_m ∈ G_j^male} o_v^{i_m} S(i_m) ) ≤ c_v,    (4.20)

Based on our recommendation model described in Section 4.3.1.3, we consider that the infotainment contents have to be cached based on the vehicle occupants' features, such as location, age, and gender. Therefore, in each cluster j, we use o_v^{i_f} ∈ {0, 1} to denote a caching decision variable that specifies whether or not self-driving car v should cache the infotainment content i_f ∈ G_j^female. In other words, in each age-based cluster j, we cache the content most requested by female users in the area of self-driving car v:

o_v^{i_f} =
    1, if content i_f gets cached in the self-driving car v,
    0, otherwise.    (4.21)

Furthermore, we use o_v^{i_m} ∈ {0, 1} to denote a decision variable that specifies whether or not self-driving car v should cache the infotainment content i_m ∈ G_j^male. In other words, in each age-based cluster j, we cache the content most requested by male users in the area of self-driving car v:

o_v^{i_m} =
    1, if content i_m gets cached in the self-driving car v,
    0, otherwise.    (4.22)

To analyze the utilization of the cache storage c_v, we use both the cache hit and the cache miss. In addition, we consider that both i_f and i_m share the same cache storage c_v of the self-driving car v. Therefore, hereafter, we leave out the superscript and subscript on each infotainment content and use i to represent any content i_f or i_m. Furthermore, we use h_i^{u→v} ∈ {0, 1} to define the cache hit indicator of infotainment content i ∈ I_r(n) needed by vehicle occupant u ∈ U of


self-driving car v:

h_i^{u→v} =
    1, if content i needed by vehicle occupant u is retrieved from self-driving car v,
    0, otherwise.    (4.23)

If the requested infotainment content i ∈ I_r(n) is not cached in the self-driving car v (h_i^{u→v} = 0), the self-driving car v sends the request for infotainment content i to its associated RSU r over the wireless channel. Here, we consider that at RSU r, the MEC server r caches in its cache storage c_r the contents that have high predicted ratings and probabilities of being needed at the edge, i.e., in the area n of RSU r. Therefore, caching at RSU r must satisfy the following cache capacity constraint:

q_i^{DC→r} ∑_{i ∈ I_r(n)} o_r^i S(i) ≤ c_r,    (4.24)

where we use o_r^i to denote the cache decision variable that specifies whether or not MEC server r caches content i ∈ I_r(n); o_r^i is expressed as follows:

o_r^i =
    1, if content i ∈ I_r(n) gets cached at MEC server r,
    0, otherwise.    (4.25)

In addition, at RSU r, we use the cache hit indicator h_i^{r→v} ∈ {0, 1}, where h_i^{r→v} is given by:

h_i^{r→v} =
    1, if the content i needed by self-driving car v is cached at RSU r,
    0, otherwise.    (4.26)

Furthermore, we assume that both the self-driving cars and the MEC servers attached to RSUs have limited cache capacities. Therefore, if there is no available cache storage because c_v or c_r is full, the MEC server or the self-driving car can use the Least Frequently Used (LFU) cache replacement policy [107], [80] to replace the least frequently used contents. Here, o_r^i, o_v^{i_m}, and o_v^{i_f} are used


in deciding which content to cache in the cache storage. On the other hand, when the cache storage is full, LFU is used to decide which content to replace in the cache storage. However, when the content i requested by the self-driving car is not cached in the MEC server's cache storage, the MEC server sends the request to the DC.
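The LFU replacement policy described above can be sketched as a tiny cache class. For simplicity, this hypothetical sketch counts capacity in number of contents rather than in bytes of c_v, and breaks frequency ties arbitrarily.

```python
from collections import Counter

class LFUCache:
    """Minimal LFU replacement sketch for a cache storage: when the cache
    is full, evict the least-frequently-used content."""
    def __init__(self, capacity):
        self.capacity = capacity   # max number of contents (simplification)
        self.store = {}            # content id -> content size S(i)
        self.freq = Counter()      # request counts per content

    def get(self, cid):
        if cid in self.store:      # cache hit (h = 1)
            self.freq[cid] += 1
            return True
        return False               # cache miss: fetch from RSU/DC instead

    def put(self, cid, size):
        if len(self.store) >= self.capacity:
            # Evict the least frequently used cached content.
            victim = min(self.store, key=lambda c: self.freq[c])
            del self.store[victim]
        self.store[cid] = size
        self.freq[cid] += 1

cache = LFUCache(capacity=2)
cache.put("i1", 8e6)
cache.put("i2", 4e6)
cache.get("i1")        # i1 is now more popular than i2
cache.put("i3", 2e6)   # cache full: evicts i2, the LFU content
```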

4.3.4 Computation Model for Cached Content

In this dissertation, we consider that both the MEC servers and the self-driving cars have computation resources that can be used to compute or process cached contents on demand. As an illustrative example, a vehicle occupant may ask for an infotainment content format (e.g., .mpeg) that is not cached in the cache storage, while another format (e.g., .avi) of the requested infotainment content may exist in the cache storage. Therefore, to satisfy the demand of the vehicle occupant, the cached content can be computed, or transcoded, to the desired format using computation resources. To serve cached content after computation, we use a computation decision variable h_{i′}^{v→u} for cached content, where h_{i′}^{v→u} is defined as follows:

h_{i′}^{v→u} =
    1, if the content i′ requested by vehicle occupant u is returned from self-driving car v after processing cached content i,
    0, otherwise.    (4.27)

Furthermore, to ensure that self-driving car v sends exactly one format of the requested infotainment content, we formulate the following constraint:

h_i^{u→v} + h_{i′}^{v→u} ≤ 1.    (4.28)

In addition, converting content i to content i′ at self-driving car v requires the utilization of the computation resource p_v^{i→i′}. Therefore, the allocation of the computational resource p_v^{i→i′} is given by:

p_v^{i→i′} = p_v (h_i^{u→v} ϱ_v^{i→i′} z^{i→i′}) / (∑_{u∈U} ∑_{i∈I} h_i^{u→v} ϱ_v^{i→i′} z^{i→i′}), ∀v ∈ V.    (4.29)


Here, we use z^{i→i′} to denote the computational intensity, measured in CPU cycles per bit, i.e., the computational workload for converting cached content i to i′. In addition, we use ϱ_v^{i→i′} to denote the computation decision variable, where ϱ_v^{i→i′} is defined as follows:

ϱ_v^{i→i′} =
    1, if the cached content i in self-driving car v is converted to the requested format i′,
    0, otherwise.    (4.30)

For the computational resource allocation defined in (4.29), we apply weighted proportional allocation [108]. We chose weighted proportional allocation over other techniques due to its simplicity of implementation in various practical environments, such as communication systems like 4G and 5G cellular networks and Vehicular Ad-hoc Networks [64]. By applying weighted proportional allocation [108], each transcoding task is assigned a fraction of the computational resource based on its computation workload requirement. Therefore, the following constraint must be satisfied during computation resource allocation:

∑_{u=1}^{U} ∑_{i=1}^{I_r(n)} q_u^v h_i^{u→v} ϱ_v^{i→i′} p_v^{i→i′} ≤ p_v.    (4.31)

Furthermore, converting content i to content i′ takes time. Therefore, at self-driving car v, we define the execution time τ_v^{i→i′} as follows:

τ_v^{i→i′} = q_u^v h_i^{u→v} ϱ_v^{i→i′} z^{i→i′} S(i) / p_v^{i→i′}.    (4.32)
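The weighted proportional allocation of Eq. (4.29) and the execution time of Eq. (4.32) can be sketched together. The CPU budget and task workloads below are hypothetical, and all decision variables (q, h, ϱ) are taken as 1 for the active tasks.

```python
def allocate_and_time(tasks, p_v):
    """Weighted proportional allocation (Eq. 4.29) of the car's CPU budget
    p_v (cycles/s) across transcoding tasks, then the execution time of
    Eq. (4.32). Each task is (z, S): computational intensity z in
    cycles/bit and cached content size S(i) in bits."""
    total_z = sum(z for z, _ in tasks)
    results = []
    for z, size in tasks:
        p = p_v * z / total_z               # Eq. (4.29): share of p_v
        results.append((p, z * size / p))   # Eq. (4.32): execution time
    return results

# Two hypothetical .avi -> .mpeg transcoding jobs sharing a 2 GHz budget.
alloc = allocate_and_time([(100, 8e6), (300, 8e6)], p_v=2e9)
```

Note that with equal content sizes the execution times coincide: the heavier task receives proportionally more CPU, which is exactly the effect of the weighting by z.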

Moreover, if self-driving car v cannot satisfy the constraint in (4.31) for converting infotainment content i into the requested infotainment content i′ due to its limited computational resources, the self-driving car sends the request for content i′ to the RSU.

Upon receiving the request for content i′, the MEC server converts cached content i into content i′ using its computation resources. Therefore, the execution time τ_r^{i→i′} for converting


cached content i into content i′ at the MEC server r can be defined as follows:

τ_r^{i→i′} = (1 − ϱ_v^{i→i′}) (q_v^r h_i^{r→v} ϱ_r^{i→i′} z^{i→i′} S(i) / p_r^{i→i′}),    (4.33)

where p_r^{i→i′} represents the computational resource required for converting content i to content i′ at MEC server r. In addition, we use ϱ_r^{i→i′} to denote the computation decision variable at each MEC server r, where ϱ_r^{i→i′} is mathematically expressed as follows:

ϱ_r^{i→i′} =
    1, if the desired format i′ at the MEC server is obtained by converting cached content i,
    0, otherwise.    (4.34)

At RSU r, to convert cached content i to content i′, we can define the computational resource allocation p_r^{i→i′} as follows:

p_r^{i→i′} = p_r (1 − ϱ_v^{i→i′}) (ϱ_r^{i→i′} h_i^{r→v} z^{i→i′}) / (∑_{v∈V} ∑_{i∈I} ϱ_r^{i→i′} h_i^{r→v} z^{i→i′}), ∀r ∈ R.    (4.35)

Furthermore, each MEC server is equipped with limited computation resources. Therefore, during the computation resource allocation, the following constraint must be satisfied:

∑_{v=1}^{V} ∑_{i=1}^{I_r(n)} q_v^r h_i^{r→v} ϱ_r^{i→i′} p_r^{i→i′} ≤ P_r.    (4.36)

For the content i′ requested by self-driving car v, we use h_{i′}^{r→v} to denote a decision variable that specifies whether or not the MEC server sends back the needed content i′ to the self-driving car v after computation; h_{i′}^{r→v} is expressed as follows:

h_{i′}^{r→v} =
    1, if the needed content i′ is sent back from MEC server r to the self-driving car v after computation,
    0, otherwise.    (4.37)

4.3.5 Control Model

To ensure that converting cached content i to the requested content i′ is performed at exactly one location, either at the self-driving car or at the MEC server, and that the self-driving car or MEC server sends exactly one format of the content, we formulate the following constraints:

q_u^v (h_i^{u→v} + h_{i′}^{v→u}) + q_v^r η_v (h_i^{r→v} + h_{i′}^{r→v}) ≤ 1,    (4.38)

ϱ_v^{i→i′} + q_v^r (1 − ϱ_v^{i→i′}) ≤ 1.    (4.39)

Here, we use η_v = 1 − (h_i^{u→v} + h_{i′}^{v→u}). However, if the above constraints cannot be satisfied due to limited computation and caching resources, the MEC server submits the request for content i′ to the remote cloud. Furthermore, for control, we combine the proposed deep learning, communication, caching, and computation models into one optimization problem, discussed in Section 4.4, for minimizing the total delay τ_u^{Tot}(q, h, ϱ) of retrieving infotainment contents, where τ_u^{Tot}(q, h, ϱ) is given by:

τ_u^{Tot}(q, h, ϱ) = τ_u^v h_i^{u→v} + h_{i′}^{v→u} ϱ_v^{i→i′} τ_v^{i→i′} + (1 − (h_i^{u→v} + ϱ_v^{i→i′} h_{i′}^{v→u})) (τ_v^r h_i^{r→v} + τ_r^{i→i′} ϱ_r^{i→i′} h_{i′}^{r→v}) + (1 − (h_i^{r→v} + ϱ_r^{i→i′} h_{i′}^{r→v})) τ_r^{DC}.    (4.40)

In the above equation, a requested infotainment content can be retrieved in the self-driving car. However, if the content cannot be retrieved in the self-driving car, the car sends a request to the RSU, which can return the content. In the worst case, if the content cannot be retrieved from the self-driving car or the RSU, the DC is used.

4.4 Problem Formulation and Solution

Here, for a self-driving car, we present in detail our optimization problem and its solution.


4.4.1 Problem Formulation

To combine the proposed deep learning, communication, caching, computation, and control models into one optimization problem that minimizes the total delay τ_u^{Tot}(q, h, ϱ) experienced by the vehicle's occupants in retrieving infotainment contents, we formulate the following optimization problem:

min_{q, h, ϱ} ∑_{u=1}^{U} τ_u^{Tot}(q, h, ϱ)    (4.41)

subject to:

∑_{v=1}^{V} q_v^r ≤ 1, ∀r ∈ R,    (4.41a)

∑_{u=1}^{U} ∑_{i=1}^{I_r(n)} q_u^v h_i^{u→v} ϱ_v^{i→i′} p_v^{i→i′} ≤ p_v, ∀v ∈ V,    (4.41b)

q_v^r ∑_{j=1}^{K} ( ∑_{i_f ∈ G_j^female} o_v^{i_f} S(i_f) + ∑_{i_m ∈ G_j^male} o_v^{i_m} S(i_m) ) ≤ c_v,    (4.41c)

q_u^v (h_i^{u→v} + h_{i′}^{v→u}) + q_v^r η_v (h_i^{r→v} + h_{i′}^{r→v}) ≤ 1,    (4.41d)

q_u^v ϱ_v^{i→i′} + q_v^r (1 − ϱ_v^{i→i′}) ≤ 1.    (4.41e)

In the formulated problem (4.41), constraint (4.41a) guarantees that the self-driving car needs to be connected to an RSU r ∈ R for retrieving infotainment contents. Furthermore, constraints (4.41b) and (4.41c) ensure that the computational and caching resource utilization satisfies the computational and caching capacity constraints of each self-driving car v. In addition, constraints (4.41b) and (4.41c) are based on the deep learning output, where the self-driving car caches the contents based on the location, age, and gender features. Constraint (4.41d) guarantees that the requested content must be returned from one location, either the self-driving car or the RSU. Finally, constraint (4.41e) guarantees that computing cached infotainment content i into the requested infotainment content i′ has to be performed at one location, either at self-driving car v or at MEC server r. In other words, constraints (4.41d) and (4.41e) are related to control.

The formulated problem (4.41) is not easy to solve because of its non-convex structure. Consequently, to transform (4.41) into a convex optimization problem and solve it, we apply Block Successive Majorization-Minimization (BS-MM), presented in [94], [69, 70] and described in Section 4.4.2.

4.4.2 Proposed Solution: Distributed Optimization Control Algorithm

Our proposed solution for solving (4.41) is based on the BS-MM algorithm, where BS-MM is one of the Majorization-Minimization (MM) approaches presented in [94]. Here, we prefer BS-MM due to its simplicity in partitioning our formulated problem into blocks/subproblems. Then, at each round, we can apply BS-MM to one block of variables while keeping the values of the other blocks of variables fixed, until all blocks of variables have been used. Furthermore, BS-MM is easy to implement using parallel computation, where each subproblem can be solved separately. However, to make sure that all blocks of variables are used, index selection rules presented in [64, 94], such as Gauss-Southwell, cyclic, and randomized rules, can be utilized.

To apply BS-MM to the formulated optimization problem in (4.41), we define Q ≜ {q : ∑_{u=1}^{U} q_u^v + q_v^r ≥ 1, q_u^v, q_v^r ∈ [0, 1]}, H ≜ {h : ∑_{u=1}^{U} (h_i^{u→v} + h_{i′}^{v→u}) + (1 − (h_i^{u→v} + h_{i′}^{v→u}))(h_i^{r→v} + h_{i′}^{r→v}) ≤ 1, h_i^{u→v}, h_{i′}^{v→u}, h_i^{r→v}, h_{i′}^{r→v} ∈ [0, 1]}, and P ≜ {ϱ : ∑_{i,i′ ∈ I} ϱ_v^{i→i′} + (1 − ϱ_v^{i→i′}) ϱ_r^{i→i′} ≤ 1, ϱ_v^{i→i′}, ϱ_r^{i→i′} ∈ [0, 1]} to be the non-empty and closed sets of the relaxed variables, where q, h, and ϱ are vectors of the relaxed variables. Furthermore, to simplify our notation for (4.41), we use the following F(q, h, ϱ) to represent (4.41):

F(q, h, ϱ) = ∑_{u=1}^{U} τ_u^{Tot}(q, h, ϱ).    (4.42)

In both (4.41) and (4.42), the constraints remain the same. To solve (4.42), we use the following two MM steps:

• Step 1: In the majorization process, we first formulate a convex surrogate function that is an upper bound of the formulated problem in (4.42). We use F_j(q, h, ϱ) to denote this convex surrogate function.

• Step 2: In the minimization step, we minimize the formulated convex surrogate function F_j(\mathbf{q}, \mathbf{h}, \boldsymbol{\varrho}) rather than (4.42), which is difficult to solve directly.

The success of solving the formulated problem in (4.42) depends on the choice of the surrogate function F_j(\mathbf{q}, \mathbf{h}, \boldsymbol{\varrho}). It is therefore preferable to choose a surrogate function that is convex, easy to handle, and an upper bound of (4.42); in other words, the surrogate function F_j(\mathbf{q}, \mathbf{h}, \boldsymbol{\varrho}) has to follow the shape of (4.42). In the first step, related to majorization, we apply the proximal minimization technique introduced in [94]: to construct the surrogate function F_j(\mathbf{q}, \mathbf{h}, \boldsymbol{\varrho}), we add a quadratic term \frac{\alpha_j}{2}\|\mathbf{q}_j - \bar{\mathbf{q}}\|^2 to the formulated problem in (4.42) to make it convex. The convex surrogate function F_j is given by:

F_j(\mathbf{q}_j, \mathbf{q}^{(t)}, \mathbf{h}^{(t)}, \boldsymbol{\varrho}^{(t)}) := F(\mathbf{q}_j, \bar{\mathbf{q}}, \bar{\mathbf{h}}, \bar{\boldsymbol{\varrho}}) + \frac{\alpha_j}{2}\|\mathbf{q}_j - \bar{\mathbf{q}}\|^2,  (4.43)

where the surrogate function (4.43) can also be applied to the vectors of variables \mathbf{h} and \boldsymbol{\varrho}, and \bar{\mathbf{q}}, \bar{\mathbf{h}}, \bar{\boldsymbol{\varrho}} denote initial feasible points. Due to its quadratic term \frac{\alpha_j}{2}\|\mathbf{q}_j - \bar{\mathbf{q}}\|^2, the formulated surrogate function F_j(\mathbf{q}, \mathbf{h}, \boldsymbol{\varrho}) is a convex optimization problem.
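The majorization property behind (4.43) can be checked numerically: adding the proximal quadratic term yields a function that upper-bounds the objective everywhere and is tight at the current point. The following minimal Python sketch illustrates this with a one-dimensional stand-in objective F of our own choosing, not the thesis delay model:

```python
# Numeric check of the majorization step: the proximal surrogate
# F_j(x) = F(x) + (alpha_j / 2) * (x - x_t)^2 upper-bounds F everywhere
# and touches it at the current point x_t. F is a toy stand-in objective.

def F(x):
    return (x - 1.0) ** 2

def surrogate(x, x_t, alpha=2.0):
    return F(x) + (alpha / 2.0) * (x - x_t) ** 2

x_t = 0.25
assert surrogate(x_t, x_t) == F(x_t)  # tight at the current point x_t
assert all(surrogate(x, x_t) >= F(x) for x in [-1.0, 0.0, 0.5, 2.0])  # upper bound
```

Minimizing this surrogate instead of F itself is exactly the MM step described above.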

In the second step, related to minimization, we minimize the convex surrogate function F_j(\mathbf{q}, \mathbf{h}, \boldsymbol{\varrho}) using Algorithm 4, where F_j(\mathbf{q}, \mathbf{h}, \boldsymbol{\varrho}) is an upper-bound function of (4.42). Furthermore, we divide (4.43) into blocks, where \mathcal{J}^t denotes the set of indexes at each iteration t and \alpha_j is a positive penalty parameter for j \in \mathcal{J}^t. Using our proposed Algorithm 4, at each iteration t + 1 we solve the following subproblems to obtain the updated solution of (4.43):

\mathbf{q}_j^{(t+1)} \in \arg\min_{\mathbf{q}_j \in \mathcal{Q}} F_j(\mathbf{q}_j, \mathbf{q}^{(t)}, \mathbf{h}^{(t)}, \boldsymbol{\varrho}^{(t)}),  (4.44)

\mathbf{h}_j^{(t+1)} \in \arg\min_{\mathbf{h}_j \in \mathcal{H}} F_j(\mathbf{h}_j, \mathbf{h}^{(t)}, \mathbf{q}_j^{(t+1)}, \boldsymbol{\varrho}^{(t)}),  (4.45)

\boldsymbol{\varrho}_j^{(t+1)} \in \arg\min_{\boldsymbol{\varrho}_j \in \mathcal{P}} F_j(\boldsymbol{\varrho}_j, \boldsymbol{\varrho}^{(t)}, \mathbf{q}_j^{(t+1)}, \mathbf{h}_j^{(t+1)}).  (4.46)

To obtain the solutions of (4.44), (4.45), and (4.46), we use the relaxed vectors \mathbf{q}_j, \mathbf{h}_j, and \boldsymbol{\varrho}_j. Therefore, to ensure that the vectors \mathbf{q}_j, \mathbf{h}_j, and \boldsymbol{\varrho}_j become vectors of binary variables, we use the rounding technique introduced in [69]. As an example, let q_v^{r*} \in \mathbf{q}_j^{(t+1)} be a solution obtained with the relaxed variable and \varphi \in (0, 1) be a rounding threshold; to make q_v^{r*} binary, the following rule can be applied:

q_v^{r*} = \begin{cases} 1, & \text{if } q_v^{r*} \geq \varphi, \\ 0, & \text{otherwise}. \end{cases}  (4.47)
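The thresholding rule in (4.47) is straightforward to implement. A minimal sketch (the variable names `q_relaxed` and `phi` are ours, not the thesis notation):

```python
# Sketch of the rounding step in (4.47): relaxed cache-decision variables in
# [0, 1] are forced to binary values via a threshold phi in (0, 1).

def round_decisions(q_relaxed, phi=0.5):
    """Map each relaxed decision q in [0, 1] to 1 if q >= phi, else 0."""
    return [1 if q >= phi else 0 for q in q_relaxed]

print(round_decisions([0.92, 0.41, 0.77, 0.08]))  # -> [1, 0, 1, 0]
```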

However, as shown in [64, 70], the rounding technique can violate the resource constraints (communication, caching, and computation resources). Therefore, to prevent the violation of resource constraints, we solve F_j in the form F_j + \beta_v \Delta_v, where the constraints (3.27a), (3.27b), and (3.27c) of F_j + \beta_v \Delta_v are reformulated as follows:

\sum_{v=1}^{V} q_{rv}\, a_{rv} \leq 1 + \Delta_{v_a},\ \forall r \in \mathcal{R},  (4.48)

\sum_{u=1}^{U} \sum_{i=1}^{I_r(n)} q_{vu}\, h_i^{u \to v}\, \varrho_v^{i \to i'}\, p_v^{i \to i'} \leq p_v + \Delta_{v_p},\ \forall v \in \mathcal{V},  (4.49)

q_{rv} \sum_{j=1}^{k} \Big( \sum_{i_f \in G_j^{female}} o_v^{i_f} S(i_f) + \sum_{i_m \in G_j^{male}} o_v^{i_m} S(i_m) \Big) \leq c_v + \Delta_{v_c},  (4.50)

where \Delta_v = \Delta_{v_a} + \Delta_{v_c} + \Delta_{v_p} denotes the violation of the resource constraints, i.e., \Delta_{v_a} for the communication, \Delta_{v_c} for the caching, and \Delta_{v_p} for the computation resources. In addition, we use the weight parameter \beta_v of \Delta_v for balancing the resource constraint violation. The values of \Delta_{v_a}, \Delta_{v_c}, and \Delta_{v_p} are calculated as follows:

\Delta_{v_a} = \max\Big(0,\ \sum_{v=1}^{V} q_{rv}\, a_{rv} - 1\Big),\ \forall r \in \mathcal{R},  (4.51)

\Delta_{v_p} = \max\Big(0,\ \sum_{u=1}^{U} \sum_{i=1}^{I_r(n)} q_{vu}\, h_i^{u \to v}\, \varrho_v^{i \to i'}\, p_v^{i \to i'} - p_v\Big),\ \forall v \in \mathcal{V},  (4.52)

\Delta_{v_c} = \max\Big(0,\ q_{rv} \sum_{j=1}^{k} \Big( \sum_{i_f \in G_j^{female}} o_v^{i_f} S(i_f) + \sum_{i_m \in G_j^{male}} o_v^{i_m} S(i_m) \Big) - c_v\Big).  (4.53)
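Each violation term in (4.51)-(4.53) is simply the positive part of the difference between resource demand and capacity. A minimal sketch, with made-up demand and capacity numbers:

```python
# Illustrative computation of the constraint-violation terms (4.51)-(4.53):
# each Delta is max(0, demand - capacity); zero means no violation.
# The demand/capacity numbers below are placeholders, not thesis values.

def violation(demand, capacity):
    """Delta = max(0, demand - capacity)."""
    return max(0.0, demand - capacity)

delta_a = violation(demand=1.3, capacity=1.0)    # communication, (4.51)
delta_p = violation(demand=3.2, capacity=3.6)    # computation, (4.52)
delta_c = violation(demand=95.0, capacity=100.0) # caching, (4.53)
delta_v = delta_a + delta_c + delta_p
print(round(delta_v, 6))  # -> 0.3 (only the communication constraint is violated)
```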

To verify that the rounding is performed correctly, we need to evaluate it. For the evaluation, we use the integrality gap introduced in [69] and defined below:

Definition 2 (Integrality gap) For the formulated problems F_j + \beta_v \Delta_v and F_j, the integrality gap is mathematically defined as follows:

\varphi_j = \min_{\mathbf{q}, \mathbf{h}, \boldsymbol{\varrho}} \frac{F_j}{F_j + \beta_v \Delta_v},  (4.54)

where the solution of F_j is obtained using the relaxed vectors \mathbf{q}_j, \mathbf{h}_j, and \boldsymbol{\varrho}_j, while the solution of F_j + \beta_v \Delta_v is obtained by applying the rounding technique that forces the vectors \mathbf{q}_j, \mathbf{h}_j, and \boldsymbol{\varrho}_j to take binary values. The rounding is successful if \varphi_j \leq 1, i.e., there is no violation of the 3C resource constraints.
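The check in Definition 2 reduces to one division and a comparison. The numbers below are illustrative placeholders only:

```python
# Sketch of the integrality-gap check (4.54): the rounded binary solution is
# accepted when phi_j = F_relaxed / (F_rounded + beta_v * Delta_v) <= 1.
# All numeric values here are made up for illustration.

def integrality_gap(f_relaxed, f_rounded, beta_v, delta_v):
    return f_relaxed / (f_rounded + beta_v * delta_v)

phi_j = integrality_gap(f_relaxed=10.2, f_rounded=10.8, beta_v=2.0, delta_v=0.0)
print(phi_j <= 1)  # -> True: rounding accepted, no 3C constraint violated
```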

Using BS-MM [94], we propose a distributed optimization control algorithm (Algorithm 4) for solving both formulated problems F_j and F_j + \beta_v \Delta_v.

As a precondition, using the MLP output, we assume that the RSUs precache contents based on the predicted ratings and the probability values of the contents to be requested in their areas. In addition, we assume that k-means and binary classification have already been applied to the MLP and CNN outputs to find the recommended infotainment contents that match the passengers' features and need to be cached in the self-driving cars.

Algorithm 4: Distributed optimization control algorithm for joint deep learning and 4C.
1: Input: \mathcal{R}: a vector of the available RSUs on the route of the self-driving car; \omega_{rv}: a vector of the wireless capacities of the RSUs; \mathcal{U}: a vector of the vehicle's occupants; \mathcal{X}: a vector of the recommended contents for self-driving car v; c_v, \psi_{vu}, p_v;
2: Output: \mathbf{q}^*, \mathbf{h}^*, \boldsymbol{\varrho}^*;
3: Initialization: t = 0;
4: Use the input to get initial feasible points \mathbf{q}^{(0)}, \mathbf{h}^{(0)}, \boldsymbol{\varrho}^{(0)};
5: repeat
6:   Select the index set \mathcal{J}^t;
7:   Compute \mathbf{q}_j^{(t+1)} \in \arg\min_{\mathbf{q}_j \in \mathcal{Q}} F_j(\mathbf{q}_j, \mathbf{q}^{(t)}, \mathbf{h}^{(t)}, \boldsymbol{\varrho}^{(t)});
8:   Set \mathbf{q}_k^{(t+1)} = \mathbf{q}_k^{(t)}, \forall k \notin \mathcal{J}^t;
9:   Compute \mathbf{h}_j^{(t+1)} and \boldsymbol{\varrho}_j^{(t+1)} by solving (4.45) and (4.46);
10:  t = t + 1;
11: until \lim_{t \to \infty} \inf_{\mathbf{q}, \mathbf{h}, \boldsymbol{\varrho}} \|F_j^{(t+1)} - F_j^{(t)}\|^2 = 0;
12: Use the rounding technique to enforce \mathbf{q}_j^{(t+1)}, \mathbf{h}_j^{(t+1)}, and \boldsymbol{\varrho}_j^{(t+1)} to have binary values;
13: Compute F_j + \beta_v \Delta_v and \varphi_j;
14: When \varphi_j \leq 1, set \mathbf{q}^* = \mathbf{q}_j^{(t+1)}, \mathbf{h}^* = \mathbf{h}_j^{(t+1)}, and \boldsymbol{\varrho}^* = \boldsymbol{\varrho}_j^{(t+1)} as the solution.

To retrieve these infotainment contents and minimize the delay experienced by its passengers in getting the contents, the self-driving car solves F_j and F_j + \beta_v \Delta_v by

using Algorithm 4. As the input of Algorithm 4, we use a vector of the vehicle's occupants, a vector of the available RSUs on the route of the self-driving car, a vector of the recommended contents needed in self-driving car v to satisfy the passengers' demands, a vector of wireless link capacities, p_v, \psi_{vu}, and c_v. First, Algorithm 4 obtains initial feasible points \mathbf{q} = \mathbf{q}^{(0)}, \mathbf{h} = \mathbf{h}^{(0)}, and \boldsymbol{\varrho} = \boldsymbol{\varrho}^{(0)}. Then, Algorithm 4 selects an index set \mathcal{J}^t at iteration t using the index selection rules and performs iterations for solving (4.43), updating the solutions by solving the subproblems (4.44), (4.45), and (4.46) until \lim_{t \to \infty} \inf_{\mathbf{q}, \mathbf{h}, \boldsymbol{\varrho}} \|F_j^{(t+1)} - F_j^{(t)}\|^2 = 0. We consider that Algorithm 4 reaches a coordinate-wise minimum point \mathbf{q}_j^{(t+1)}, \mathbf{h}_j^{(t+1)}, i.e., a stationary point of (4.43), when this limit condition holds. To ensure that \mathbf{q}_j^{(t+1)}, \mathbf{h}_j^{(t+1)}, and \boldsymbol{\varrho}_j^{(t+1)} are vectors of binary variables, Algorithm 4 then applies the rounding technique, solves F_j + \beta_v \Delta_v, and calculates \varphi_j. When \varphi_j \leq 1, Algorithm 4 sets \mathbf{q}^* = \mathbf{q}_j^{(t+1)}, \mathbf{h}^* = \mathbf{h}_j^{(t+1)}, and \boldsymbol{\varrho}^* = \boldsymbol{\varrho}_j^{(t+1)} as the solution of (4.43), which is a coordinate-wise minimum point that does not violate the 3C resource constraints.
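The iterative structure of Algorithm 4 can be sketched on a toy two-block convex objective, where the proximal surrogate of each block has a closed-form minimizer and the loop stops when successive iterates no longer move. The objective f below is a stand-in we chose for illustration, not the thesis delay model:

```python
# Toy sketch of the BS-MM loop of Algorithm 4 on a 2-block convex example:
# at each iteration one block is updated by minimizing the proximal surrogate
# F_j(x) = F(x) + (alpha/2)(x - x_t)^2 while the other block is held fixed.
# Stand-in objective: f(q, h) = (q - 0.3)^2 + (h - 0.7)^2 + q * h.

def bsmm(alpha=1.0, tol=1e-9, max_iter=10_000):
    q, h = 0.0, 0.0  # initial feasible points q(0), h(0)
    for _ in range(max_iter):
        # Closed-form minimizers of the proximal surrogates of f:
        q_new = (0.6 - h + alpha * q) / (2.0 + alpha)
        h_new = (1.4 - q_new + alpha * h) / (2.0 + alpha)
        # Stop when successive iterates no longer move (stationary point).
        if (q_new - q) ** 2 + (h_new - h) ** 2 < tol:
            return q_new, h_new
        q, h = q_new, h_new
    return q, h

q_star, h_star = bsmm()
# (q_star, h_star) satisfies the stationarity conditions
# 2(q - 0.3) + h = 0 and 2(h - 0.7) + q = 0 of the stand-in objective.
```

In the thesis setting, each block update would additionally project onto the feasible sets Q, H, and P, and the rounding step of Algorithm 4 would follow convergence.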


Table 4.1: The route used by the self-driving car.

Route | Distance (km) | Max. speed (km/h) | RSUs
----- | ------------- | ----------------- | ----
1     | 54.62         | 109.016           | 1-2
2     | 53.82         | 107.34            | 2-3
3     | 54.02         | 108.17            | 3-4
4     | 52.83         | 105.38            | 4-5
5     | 55.66         | 111.33            | 5-6

To analyze the convergence of our algorithm (Algorithm 4), which is based on the MM approach described in [94], we formulate the following remark:

Remark 2 (Convergence of Algorithm 4) By applying the BS-MM algorithm, which belongs to the MM framework [94], the proposed Algorithm 4 converges to a stationary point, which is a coordinate-wise minimum point, when the vectors \mathbf{q}_j^{(t+1)}, \mathbf{h}_j^{(t+1)}, and \boldsymbol{\varrho}_j^{(t+1)} cannot find a better descent direction at iteration t + 1; in other words, \lim_{t \to \infty} \inf_{\mathbf{q}, \mathbf{h}, \boldsymbol{\varrho}} \|F_j^{(t+1)} - F_j^{(t)}\|^2 = 0.

For the complexity analysis of our algorithm (Algorithm 4), we use the complexity analysis of Block Successive Upper-bound Minimization (BSUM) introduced in [68], where BSUM belongs to the MM framework. It is proved in [68] that the BSUM algorithm has O(1/r) global sublinear convergence for iteration index r. Therefore, we make the following remark on the complexity of the proposed algorithm:

Remark 3 (Algorithm 4 has sublinear complexity) The proposed Algorithm 4 is based on BS-MM and uses a proximal upper-bound minimization approach, which makes Algorithm 4 a BSUM-based algorithm. Therefore, the proposed Algorithm 4 has O(1/j) iteration complexity for j \in \mathcal{J}^t [68].

4.5 Simulation Results

In this section, we discuss the simulation setup and the simulation results. For the performance evaluation of our deep learning based caching approach for self-driving cars, we use Keras with TensorFlow [111] for deep learning, pandas [74] for data analysis, and the Google Maps Services [110] for mobility analysis.

Figure 4.5: RSUs deployment, where each RSU has one MEC server for 4C.

4.5.1 Simulation Setting

In our simulation setup, we use the LSTM described in [96, 97] for content ratings and the MLP presented in Section 4.3.1.1 to predict the probabilities of infotainment contents being needed at the edge, i.e., in specific areas of the RSUs. For the dataset, we use the MovieLens dataset described in [112], which contains movie information such as movie id, title, movie type, and release date. However, the dataset contains no information on movie sizes or formats. Because our caching approach for self-driving cars depends on content sizes (the movies to be cached must satisfy the cache capacity constraint), we generate movie sizes randomly, setting the size of each movie i in the range from S(i) = 317 to S(i) = 750 Mb, and we randomly assign a format (e.g., .avi or .mpg) to each movie i. Furthermore, the MovieLens dataset has user-related information such as gender, age, ratings, and ZIP codes. We use this information with an LSTM model to predict whether a user will like each movie i, where our model has one input layer, one hidden layer, and one output layer. The input layer has 100 neurons, the hidden layer has 10 neurons, and the output layer has two neurons; in other words, the output layer computes the probability that a given user likes movie i.

Figure 4.6: Minimization of the loss function for predicting the movies needed at the edge (at the RSUs).

We use the probability value as a movie rating and then feed the ratings as an input

of the MLP. Our MLP uses an input layer, an output layer, and 2 hidden layers, where the MLP is used to predict the probabilities of infotainment contents being needed at the edge, in close proximity to the self-driving cars. To find the areas in which to deploy the RSUs, we convert the ZIP codes from the dataset into latitude and longitude coordinates, and we deploy the RSUs based on the movie watching counts and the locations of the users. As illustrated in Fig. 4.5, using the k-means algorithm, we use 6 areas to deploy RSUs, where each RSU r has one MEC server.
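The RSU-placement step can be sketched with a plain Lloyd-style k-means on user coordinates; the points and k = 2 below are illustrative (the thesis uses 6 areas), and no external library is assumed:

```python
# Sketch of the RSU-placement step: user coordinates (from ZIP codes) are
# grouped with k-means, and one RSU/MEC server is placed at each centroid.

def kmeans(points, k, iters=50):
    centroids = points[:k]  # simple deterministic initialization
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
            clusters[j].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids, clusters

users = [(37.24, 127.08), (37.25, 127.07), (35.10, 129.04), (35.11, 129.05)]
rsu_sites, areas = kmeans(users, k=2)  # two candidate RSU locations
```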

For the mobility analysis, we use the Google Maps service [110] to find the distance (in kilometers) and the duration (travel time) for reaching each RSU r ∈ R, where we consider that the duration depends on the traffic conditions. We use both the duration and the distance to compute the speed of the self-driving car in km/h. Based on its speed, the self-driving car can select an appropriate RSU r ∈ R to use for downloading the recommended contents. However, using the Google Maps service, we found that the distances between the RSUs were large. Therefore, we revised the RSU geographical locations and prepared a new routing table with realistic distances between the RSUs. We summarize the routing table in Table 4.1, where we consider that the self-driving car begins its trip at RSU 1 and finishes it at RSU 6. As described in Section 4.3.2, to reduce the variation in the delay for retrieving the recommended infotainment contents, we consider that the self-driving car selects the RSUs available on its route that will be used for downloading infotainment contents at the beginning of its trip.

Figure 4.7: An example of the top 8 movies that need to be cached at RSU 1.

This helps the self-driving

car to immediately start downloading the recommended infotainment contents when it reaches an

area of the selected RSU r ∈ R. In addition, we consider that the requests for infotainment contents follow the Zipf distribution described in [113]. Furthermore, each RSU r ∈ R is connected to the DC via a backhaul link of capacity \omega_{r,DC}, where \omega_{r,DC} is set in the range from 60 to 70 Mbps. In addition, we set a bandwidth of \omega_{v,r} = 10 MHz for each RSU r ∈ R. In terms of computation and caching capacities, for each MEC server r ∈ R, we set a CPU capacity of p_r = 3.6 GHz and a cache storage capacity c_r in the range from 100 to 110 TB. Furthermore, in the self-driving car, we set the number of vehicle occupants to |U_v| = 37 and use synthetic occupant demands for contents. For the communication resource of the self-driving car, we set a bandwidth of 160 MHz using 802.11ac, where the theoretical maximum data rate is \psi_{vu} = 3466.8 Mbps. Moreover, for the computation and caching capacities of the self-driving car v, we set p_v = 3.6 GHz and c_v = 100 TB.
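The Zipf demand model used for the content requests can be sketched as follows; the catalog size n = 1000 and skew s = 0.8 are our illustrative choices, not values fixed by the thesis:

```python
import random

# Sketch of the content-demand model: requests follow a Zipf distribution,
# i.e., the i-th most popular movie is requested with probability
# proportional to 1 / i^s.

def zipf_pmf(n, s):
    weights = [1.0 / (i ** s) for i in range(1, n + 1)]
    z = sum(weights)  # normalization constant
    return [w / z for w in weights]

pmf = zipf_pmf(n=1000, s=0.8)
requests = random.choices(range(1, 1001), weights=pmf, k=10_000)
# Popular (low-index) contents dominate the request stream, which is why
# caching them at the RSUs and in the car yields high cache hit rates.
```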


Figure 4.8: Age and gender-based clustering for passengers in a self-driving car.

4.5.2 Simulation Results

We use the LSTM model, the users' information, and the movie ratings to predict whether a given user u may like a given movie i. In the LSTM model, the output layer has two neurons: the first neuron predicts that user u likes movie i, while the second neuron predicts that user u does not like movie i. In the prediction process, we set the training dataset to 53% of the whole dataset and the testing dataset to the remaining 47%. The learning rate is set to 0.002, and the batch size is 250. To evaluate the prediction, we use the Mean Absolute Error (MAE), which measures the average magnitude of the errors; minimizing the prediction error therefore amounts to minimizing the MAE. The simulation results for MAE minimization demonstrate that our prediction achieves 79.3% accuracy. Furthermore, for the video ratings, we use the output of the LSTM as the input of the MLP, where the MLP predicts the probabilities of infotainment contents being needed at the edge in the areas of the 6 RSUs, i.e., in close proximity to the self-driving cars. In the MLP, we set the training dataset to 60% of the whole dataset and the testing dataset to 40%; the learning rate is set to 0.002, and the batch size is 32.
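The MAE used above is simply the mean absolute difference between the true labels and the predicted like-probabilities. A minimal sketch with toy values:

```python
# Sketch of the MAE metric used to evaluate the rating prediction:
# MAE = mean of |y_true - y_pred| over the test set. Values are toy data.

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1, 0, 1, 1, 0]             # 1 = user liked the movie
y_pred = [0.9, 0.2, 0.8, 0.6, 0.1]   # predicted like-probabilities
print(mae(y_true, y_pred))  # ~= 0.2
```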

At the RSUs, caching is based on location and content ratings. We use the MLP to predict which movies to cache at the MEC servers/RSUs in order to minimize both latency and backhaul traffic. Fig. 4.6 shows the minimization of the cross-entropy loss function, where our proposal achieves 98.14% accuracy.

Figure 4.9: An example of the top 8 recommended movies to cache in a self-driving car.

For caching movies at the MEC servers associated with the RSUs, each RSU caches movies in descending order of predicted rating and probability values as long as its cache storage is not full. As an illustrative example, Fig. 4.7 shows the top 8 movies that need to be cached at the MEC server associated with RSU 1. Furthermore, the self-driving car caches movies based on the vehicle occupants' features; here, we use two features: gender and age. First, the self-driving car retrieves

the MLP output via the MEC server attached to an RSU. Second, the self-driving car forms age-based clusters of the MLP output using the k-means algorithm. Third, inside each age-based cluster, the self-driving car forms gender-based clusters using binary classification; for the age-based clustering, we use 8 clusters ([0-9, 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70-79]). Fourth, using the facial images captured by the vehicle's camera, the self-driving car applies the CNN model downloaded from the RSU to predict both the age and the gender of the vehicle's occupants. Then, the self-driving car classifies the occupants into the formed age and gender-based clusters using k-means and binary classification. The simulation results in Fig. 4.8 show both the k-means and binary classifications, where each age-based cluster is composed of male and female sub-clusters. Finally, inside each age and gender-based cluster, the self-driving car identifies the movies with high ratings and high probabilities of being requested as recommendations for the contents to cache in the self-driving car. In Fig. 4.9, we give an illustrative example of the top 8 movies recommended for caching in a self-driving car; the car can cache many more movies, but for ease of illustration we present only the top 8.

Figure 4.10: Normalized cache hits for the self-driving car.

Figure 4.11: Total delay minimization problem.
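The occupant-classification step can be sketched as a simple bucket assignment once the CNN has produced (age, gender) predictions. The occupant records below are hypothetical:

```python
# Sketch of the occupant-to-cluster assignment: predicted ages are mapped to
# the 8 decade buckets [0-9, ..., 70-79], and each bucket is split by gender
# (the binary classification).

AGE_BUCKETS = [(10 * d, 10 * d + 9) for d in range(8)]

def assign(occupants):
    """Group (name, age, gender) records into (age-range, gender) clusters."""
    clusters = {(lo, hi, g): [] for (lo, hi) in AGE_BUCKETS
                for g in ("male", "female")}
    for name, age, gender in occupants:
        for lo, hi in AGE_BUCKETS:
            if lo <= age <= hi:
                clusters[(lo, hi, gender)].append(name)
                break
    return clusters

occupants = [("u1", 24, "female"), ("u2", 27, "male"), ("u3", 41, "female")]
clusters = assign(occupants)
print(clusters[(20, 29, "female")])  # -> ['u1']
```

The movies with the highest predicted ratings inside each non-empty cluster would then form the caching recommendation for the car.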

We consider that the distances between the RSUs with MEC servers are large. Therefore, if the self-driving car does not have access to an RSU with a MEC server, the vehicle's occupants can get infotainment contents (e.g., movies) from the cache storage of the self-driving car. Furthermore, based on the demands of the vehicle's occupants, Fig. 4.10 shows the normalized cache hits for the movies cached in the self-driving car. The results show that the cache hits increase with the Zipf parameter; in other words, the videos with high demand also have high probabilities of being needed at the edge and cached, which results in high cache hit rates. However, when the self-driving car does not have a requested movie in its cache storage (a cache miss), the vehicle's occupants can retrieve the movie from the RSU or the DC.

Since our caching approach for self-driving cars aims to minimize the total delay experienced by the vehicle's occupants in retrieving contents, Fig. 4.11 shows the solution of the proposed optimization problem, i.e., the surrogate function (4.43). The simulation results in this figure show that our formulated problem converges to a stationary point, which is a coordinate-wise minimum point: when using the surrogate upper-bound function in (4.43), our proposed algorithm cannot find a better descent direction to continue its iterative process. Furthermore, the choice of \alpha_j affects the surrogate function F_j(\mathbf{q}, \mathbf{h}, \boldsymbol{\varrho}) in terms of both the size of the problem and the convergence speed.

4.6 Summary

As an application of the joint 4C framework, in this chapter we proposed a new caching approach for self-driving cars using deep learning deployed in MEC. In our approach, we used an LSTM to predict content ratings. We then used the LSTM output as the input of an MLP, which predicts the probabilities of contents being needed at the edge and cached in the areas of the RSUs, and we deployed the MLP output near the self-driving cars at the RSUs. Based on the MLP output, each MEC server attached to an RSU retrieved and cached the contents with high predicted rating and probability values. Furthermore, to cache infotainment contents suited to the features of the vehicle's occupants, we used a CNN to predict user features (age and gender) at the DC and deployed the trained and tested CNN model at the RSUs, where the self-driving car can retrieve both the CNN model and the MLP output. The self-driving car used the CNN to predict the passengers' features, and the CNN output was combined with the MLP output using the k-means and binary classification algorithms to identify the infotainment contents to download and cache. To retrieve the recommended contents, the self-driving cars and roadside units need to use communication resources. Therefore, in addition to the deep learning and caching models, we proposed a communication model; and to effectively utilize the communication, caching, and computation resources, we proposed a control model. To join the proposed deep learning, communication, caching, computation, and control models into one problem, we formulated an optimization problem that minimizes the total delay \tau_u^{Tot}(\mathbf{q}, \mathbf{h}, \boldsymbol{\varrho}) experienced by the vehicle's occupants in retrieving infotainment contents. We then applied the Block Successive Majorization-Minimization approach to solve the formulated problem. The simulation results of our caching approach show that our proposal can reduce the content downloading delay once implemented in both self-driving car and MEC server environments.


Chapter 5
Conclusion and Future Directions

In this chapter, we present a conclusion and some future directions for our research in MEC.

5.1 Conclusion

In this dissertation, we have investigated collaboration spaces for big data MEC that allow MEC servers to collaborate and handle the big data stemming from edge devices, where the data is generated not only by people but also by machines/things. To deal with the big data from edge devices in terms of communication, computation, caching, and control, we proposed a joint 4C framework in collaborative big data MEC. In addition, as an application of the joint 4C framework, we proposed a caching approach for self-driving cars in which the caching decisions depend on passengers' features learned via deep learning deployed in MEC.

First, we proposed an Overlapping k-Means Method for Collaboration Space (OKM-CS), in which MEC servers collaborate by sharing tasks, data, and resource information. Collaboration among MEC servers located in close proximity helps reduce the data and task exchange between edge devices and remote clouds and minimizes delay. We conducted an extensive simulation using a realistic base station dataset. The simulation results show that our approach performs well when the number of collaboration spaces is selected according to the size of the network topology rather than chosen randomly or via the elbow method.

Second, we proposed a joint 4C framework in collaborative MEC. We formulated the joint 4C framework as an optimization problem that maximizes bandwidth saving and minimizes network latency subject to communication, computation, and caching constraints. Because the formulated problem was intractable due to its non-convex structure, we proposed an approximation of the formulated problem that is easy to solve using the BSUM method. To apply the BSUM method, we developed a distributed optimization control algorithm. For the performance evaluation, we performed extensive numerical analysis and compared our proposed algorithm with the Douglas-Rachford splitting method. The results demonstrated that our approach outperforms the Douglas-Rachford splitting method in terms of delay and computational resource utilization.

Finally, as an application of the 4C framework, we proposed a caching approach for self-driving cars, where the caching decision and content retrieval are based on passengers' features obtained using deep learning and on the available communication, caching, and computation resources. We formulated an optimization problem for our caching approach that minimizes the total delay for retrieving contents subject to communication, computation, and caching resources. The formulated problem was not easy to solve because it was not convex. We therefore proposed the proximal upper-bound problem (which is convex) of the formulated problem and applied the Block Successive Majorization-Minimization (BS-MM) method to solve it. Using a realistic dataset, the simulation and numerical results of the proposed caching approach show good performance in terms of prediction accuracy and ease of implementation: our approach can easily be implemented in both MEC server and self-driving car environments to minimize the infotainment content downloading delay.

5.2 Future Directions

Wireless edge devices are increasing at a very rapid pace, and they generate various data with different characteristics and requirements. However, wireless user devices and things are characterized by limited resources. Therefore, for mission-critical and delay-sensitive applications, edge devices need to be supported by MEC servers. The proposed joint 4C framework in collaborative big data MEC can be one of the solutions for dealing with big data from edge devices. However, there are still many challenges that need to be addressed. Therefore, we consider the following open issues as our future research directions.

• Due to limited time, in this dissertation we focused on collaboration among MEC servers located in the same collaboration space, i.e., one collaboration space. Therefore, the collaboration of MEC servers belonging to different collaboration spaces remains an open issue for our future research.

• Self-driving cars are sensitive to delay. Consequently, rather than focusing on remote cloud utilization, MEC should be considered an appropriate technology to support self-driving cars. Therefore, self-driving car offloading and edge analytics with the joint 4C framework of MEC need further investigation to improve autonomous driving and its associated services.


Bibliography

[1] H. Jin, L. Su, D. Chen, K. Nahrstedt, and J. Xu, “Quality of information aware incentive

mechanisms for mobile crowd sensing systems,” in Proceedings of the 16th ACM International

Symposium on Mobile Ad Hoc Networking and Computing. ACM, pp. 167–176, 22 - 25 Jun.

2015 (Hangzhou, China).

[2] Knud Lasse Lueth, "State of the IoT 2018: Number of IoT devices now at 7B – market accelerating," https://iot-analytics.com/state-of-the-iot-update-q1-q2-2018-number-of-iot-devices-now-7b/, [Online; accessed 13 Apr. 2019].

[3] E. Zeydan, E. Bastug, M. Bennis, M. A. Kader, I. A. Karatepe, A. S. Er, and M. Debbah, “Big

data caching for networking: Moving from cloud to edge,” IEEE Communications Magazine,

vol. 54, no. 9, pp. 36–42, 16 Sep. 2016.

[4] S. Ranadheera, S. Maghsudi, and E. Hossain, “Computation offloading and activation of mo-

bile edge computing servers: A minority game,” IEEE Wireless Communications Letters, 28

Feb. 2018.

[5] A. Ferdowsi, U. Challita, and W. Saad, “Deep learning for reliable mobile edge analytics in

intelligent transportation systems,” arXiv preprint arXiv:1712.04135, 12 Dec. 2017.

[6] Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, "Mobile edge computing – a key technology towards 5G," ETSI White Paper, vol. 11, no. 11, pp. 1–16, 5 Sep. 2015.


[7] M. Patel, B. Naughton, C. Chan, N. Sprecher, S. Abeta, A. Neal et al., “Mobile-edge comput-

ing introductory technical white paper,” White Paper, Mobile-edge Computing (MEC) Industry

Initiative, Sep. 2014.

[8] O. Semiari, W. Saad, S. Valentin, M. Bennis, and H. V. Poor, “Context-aware small cell net-

works: How social metrics improve wireless resource allocation,” IEEE Transactions on Wire-

less Communications, vol. 14, no. 11, pp. 5927–5940, 13 Jul. 2015.

[9] T. X. Tran, A. Hajisami, P. Pandey, and D. Pompili, “Collaborative mobile edge computing in

5G networks: New paradigms, scenarios, and challenges,” IEEE Communications Magazine,

vol. 55, no. 4, pp. 54–61, 14 Apr. 2017.

[10] A. Ndikumana, S. Ullah, T. LeAnh, N. H. Tran, and C. S. Hong, “Collaborative cache al-

location and computation offloading in mobile edge computing,” in Proceedings of 19th IEEE

Asia-Pacific Network Operations and Management Symposium (APNOMS), pp. 366–369, 27-

29 Sep. 2017 (Seoul, South Korea).

[11] K. Dutta and M. Jayapal, “Big data analytics for real time systems,” in Proceedings of Big

Data analytics seminar, pp. 1–13, 11 Nov. 2015 (RWTH Aachen University, Germany).

[12] S. Kekki, W. Featherstone, Y. Fang, P. Kuure, and A. Li, “MEC in 5G networks,” ETSI

White Paper No. 28, ISBN No. 979-10-92620-22-1, Jun. 2018.

[13] E. Ahmed and M. H. Rehmani, “Mobile edge computing: opportunities, solutions, and chal-

lenges,” Future Generation Computer Systems, vol. 70, May 2017.

[14] G. Cleuziou, “An extended version of the k-means method for overlapping clustering,” in

Proceedings of the 19th IEEE International Conference on Pattern Recognition (ICPR), pp.

1–4, 08-11 Dec. 2008 (Tampa, FL, USA).

[15] Z. Han, M. Hong, and D. Wang, “Signal processing and networking for big data applica-

tions,” Cambridge University Press, 2017.


[16] E. Bastug, M. Bennis, E. Zeydan, M. A. Kader, I. A. Karatepe, A. S. Er, and M. Debbah, “Big

data meets telcos: A proactive caching perspective,” Journal of Communications and Networks,

vol. 17, no. 6, pp. 549–557, Dec. 2015.

[17] E. Bastug, K. Hamidouche, W. Saad, and M. Debbah, “Centrality-based caching for mobile wireless networks,” in 1st KuVS Workshop on Anticipatory Networks, 29-30 Sep. 2014 (Stuttgart, Germany).

[18] M. Chen, U. Challita, W. Saad, C. Yin, and M. Debbah, “Machine learning for wire-

less networks with artificial intelligence: A tutorial on neural networks,” arXiv preprint

arXiv:1710.02913, 9 Oct. 2017.

[19] W. Fan, Y. Liu, B. Tang, F. Wu, and H. Zhang, “TerminalBooster: Collaborative computation offloading and data caching via smart base stations,” IEEE Wireless Communications Letters, vol. 5, no. 6, pp. 612–615, 2 Sep. 2016.

[20] Sami Kekki, Alex Reznik, “3GPP enables MEC over a 5G core,” ETSI ISG MEC, 4

Jul. 2018, https://www.3gpp.org/news-events/partners-news/1969-mec,

[Online; accessed 5 May 2019].

[21] A. Ndikumana, N. H. Tran, T. M. Ho, Z. Han, W. Saad, D. Niyato, and C. S. Hong, “Joint

communication, computation, caching, and control in big data multi-access edge computing,”

IEEE Transactions on Mobile Computing, 29 Mar. 2019.

[22] Yun Chao Hu, Milan Patel, Dario Sabella, Nurit Sprecher, and Valerie Young, “Mobile edge computing: A key technology towards 5G,” ETSI White Paper No. 11, Sophia Antipolis, France, Sep. 2015.

[23] John Craig, Travis Broughton, “The New Intelligent Edge: Akraino Edge Stack Project Overview,” May 2018. https://www.openstack.org/assets/summits/24/presentations/21275/slides/Akranio-OverviewOpenStackv2.pdf, [Online; accessed 5 May 2019].

[24] “Akraino & StarlingX: A technical overview,” Linux Foundation, https://events.linuxfoundation.org/wp-content/uploads/2017/11/Akraino-Technical-Overview-OSS-Shane-Wang.pdf, [Online; accessed 5 May 2019].

[25] “Multi-access Edge Computing (MEC),” Nokia, https://networks.nokia.com/

solutions/multi-access-edge-computing, [Online; accessed 5 May 2019].

[26] “Multi-Access Edge Computing At-a-Glance,” Cisco, Document ID:1539975549149960

of 18 Oct. 2018, https://www.cisco.com/c/en/us/solutions/

collateral/service-provider/ultra-services-platform/

at-a-glance-c45-741450.html, [Online; accessed 5 May 2019].

[27] G. Cleuziou, “An extended version of the k-means method for overlapping clustering,” in

Proceedings of the 19th IEEE International Conference on Pattern Recognition (ICPR), 08-11

Dec. 2008 (Tampa, FL, USA), pp. 1–4.

[28] G. Lee, W. Saad, and M. Bennis, “An online secretary framework for fog network formation

with minimal latency,” arXiv preprint arXiv:1702.05569, 7 Apr. 2017.

[29] T. X. Tran, P. Pandey, A. Hajisami, and D. Pompili, “Collaborative multi-bitrate video

caching and processing in mobile-edge computing networks,” in Proceedings of 13th IEEE

Annual Conference on Wireless On-demand Network Systems and Services (WONS), pp. 165–

172, 21-24 Feb. 2017 (Jackson, WY, USA).

[30] M. Chen, Y. Hao, M. Qiu, J. Song, D. Wu, and I. Humar, “Mobility-aware caching and

computation offloading in 5G ultra-dense cellular networks,” Sensors, vol. 16, no. 7, p. 974, 25

Jun. 2016.

[31] F. Zhang, C. Xu, Y. Zhang, K. Ramakrishnan, S. Mukherjee, R. Yates, and T. Nguyen, “EdgeBuffer: Caching and prefetching content at the edge in the MobilityFirst future internet architecture,” in Proceedings of IEEE 16th International Symposium on a World of Wireless, Mobile, and Multimedia Networks (WoWMoM), pp. 1–9, 14-17 Jun. 2015 (Boston, MA, USA).

[32] X. Vasilakos, V. A. Siris, and G. C. Polyzos, “Addressing niche demand based on joint mo-

bility prediction and content popularity caching,” Computer Networks, vol. 110, pp. 306–323,

2016.


[33] W. Jiang, G. Feng, and S. Qin, “Optimal cooperative content caching and delivery policy for

heterogeneous cellular networks,” IEEE Transactions on Mobile Computing, vol. 16, no. 5, pp.

1382–1393, 03 Aug. 2016.

[34] H. Hsu and K.-C. Chen, “A resource allocation perspective on caching to achieve low la-

tency,” IEEE Communications Letters, vol. 20, no. 1, pp. 145–148, 09 Nov. 2015.

[35] Z. Tan, X. Li, F. R. Yu, L. Chen, H. Ji, and V. C. Leung, “Joint access selection and resource

allocation in cache-enabled HCNs with D2D communications,” in Proceedings of IEEE Wire-

less Communications and Networking Conference (WCNC), pp. 1–6, 19-22 Mar. 2017 (San

Francisco, CA, USA).

[36] M. Chen, M. Mozaffari, W. Saad, C. Yin, M. Debbah, and C. S. Hong, “Caching in the

sky: Proactive deployment of cache-enabled unmanned aerial vehicles for optimized quality-

of-experience,” IEEE Journal on Selected Areas in Communications, vol. 35, no. 5, pp. 1046–

1061, 09 Mar. 2017.

[37] Y. Zhou, F. R. Yu, J. Chen, and Y. Kuo, “Resource allocation for information-centric virtualized heterogeneous networks with in-network caching and mobile edge computing,” IEEE Transactions on Vehicular Technology, vol. 66, no. 12, pp. 11339–11351, 09 Aug. 2017.

[38] C. Wang, C. Liang, F. R. Yu, Q. Chen, and L. Tang, “Computation offloading and resource

allocation in wireless cellular networks with mobile edge computing,” IEEE Transactions on

Wireless Communications, vol. 16, pp. 4924–4938, 16 May 2017.

[39] R. Huo, F. R. Yu, T. Huang, R. Xie, J. Liu, V. C. Leung, and Y. Liu, “Software defined net-

working, caching, and computing for green wireless networks,” IEEE Communications Maga-

zine, vol. 54, no. 11, pp. 185–193, 15 Nov. 2016.

[40] J. Chakareski, “VR/AR immersive communication: Caching, edge computing, and trans-

mission trade-offs,” in Proceedings of the ACM Workshop on Virtual Reality and Augmented

Reality Network, pp. 36–41, 21-25 Aug. 2017 (Los Angeles, CA, USA).

[41] Y. Cui, W. He, C. Ni, C. Guo, and Z. Liu, “Energy-efficient resource allocation for cache-

assisted mobile edge computing,” arXiv preprint arXiv:1708.04813, 16 Aug. 2017.


[42] M. Chen, Y. Hao, L. Hu, M. S. Hossain, and A. Ghoneim, “Edge-CoCaCo: Toward joint optimization of computation, caching, and communication on edge cloud,” IEEE Wireless Communications, vol. 25, no. 3, pp. 21–27, 4 Jul. 2018.

[43] E. K. Markakis, K. Karras, A. Sideris, G. Alexiou, and E. Pallis, “Computing, caching, and communication at the edge: The cornerstone for building a versatile 5G ecosystem,” IEEE Communications Magazine, vol. 55, no. 11, pp. 152–157, 17 Nov. 2017.

[44] X. Wang, Y. Han, C. Wang, Q. Zhao, X. Chen, and M. Chen, “In-Edge AI: Intelligentizing mobile edge computing, caching and communication by federated learning,” arXiv preprint arXiv:1809.07857, 19 Sep. 2018.

[45] Y. He, C. Liang, F. R. Yu, and V. C. Leung, “Integrated computing, caching, and communication for trust-based social networks: A big data DRL approach,” in Proceedings of IEEE Global Communications Conference (GLOBECOM), pp. 1–6, 9-13 Dec. 2018 (Abu Dhabi, United Arab Emirates).

[46] A. Ndikumana, N. H. Tran, T. M. Ho, Z. Han, W. Saad, D. Niyato, and C. S. Hong, “Joint

communication, computation, caching, and control in big data multi-access edge computing,”

arXiv preprint:1803.11512, 30 Mar. 2018.

[47] S. Zhang, N. Zhang, X. Fang, P. Yang, and X. S. Shen, “Cost-effective vehicular network

planning with cache-enabled green roadside units,” in Proceedings of IEEE International Con-

ference on Communications (ICC), pp. 1–6, 21-25 May 2017 (Paris, France).

[48] Z. Hu, Z. Zheng, T. Wang, L. Song, and X. Li, “Roadside unit caching: Auction-based stor-

age allocation for multiple content providers,” IEEE Transactions on Wireless Communications,

vol. 16, no. 10, pp. 6321–6334, 11 Jul. 2017.

[49] F. Chen, D. Zhang, J. Zhang, X. Wang, L. Chen, Y. Liu, and J. Liu, “Distribution-aware cache replication for cooperative roadside units in VANETs,” Peer-to-Peer Networking and Applications, pp. 1–10, 05 Jul. 2017.

[50] L. Divine, J. Kurihara, and D. Kryze, “Auto-control of vehicle infotainment system based on

extracted characteristics of car occupants,” US Patent App. 13/192,629, 31 Jan. 2013.


[51] I. Raichelgauz, K. Odinaev, and Y. Y. Zeevi, “System and method for caching concept struc-

tures in autonomous vehicles,” US Patent App. 15/677,496, 1 Mar. 2018.

[52] J. Ma, J. Wang, G. Liu, and P. Fan, “Low latency caching placement policy for cloud-based VANET with both vehicle caches and RSU caches,” in Proceedings of IEEE Globecom Workshops (GC Wkshps), pp. 1–6, 4-8 Dec. 2017 (Singapore).

[53] Q. Yuan, H. Zhou, J. Li, Z. Liu, F. Yang, and X. S. Shen, “Toward efficient content delivery

for automated driving services: An edge computing solution,” IEEE Network, vol. 32, no. 1,

pp. 80–86, 26 Jan. 2018.

[54] S. Baadel, F. Thabtah, and J. Lu, “Overlapping clustering: A review,” in Proceedings of IEEE

Computing Conference, pp. 233–237, 3-15 Jul. 2016 (London, UK).

[55] R. Kune, P. K. Konugurthi, A. Agarwal, R. R. Chillarige, and R. Buyya, “The anatomy of

big data computing,” Software: Practice and Experience, vol. 46, no. 1, pp. 79–105, Jan. 2016.

[56] T. Nguyen and M. Vojnovic, “Weighted proportional allocation,” in Proceedings of the ACM Joint International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS), pp. 173–184, 07-11 Jun. 2011 (San Jose, CA, USA).

[57] S. Mosleh, L. Liu, and J. Zhang, “Proportional-fair resource allocation for coordinated multi-point transmission in LTE-Advanced,” IEEE Transactions on Wireless Communications, vol. 15, no. 8, pp. 5355–5367, 21 Apr. 2016.

[58] L. Lei, D. Yuan, C. K. Ho, and S. Sun, “Joint optimization of power and channel allocation

with non-orthogonal multiple access for 5G cellular systems,” in Proceedings of IEEE Global

Communications Conference (GLOBECOM), pp. 1–6, 6-10 Dec. 2015 (San Diego, CA, USA).

[59] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “A survey on mobile edge computing:

The communication perspective,” IEEE Communications Surveys and Tutorials, vol. 19, no. 4,

pp. 2322– 2358, 25 Aug. 2017.

[60] C. B. Networks, “Backhauling X2,” http://cbnl.com/resources/backhauling-x2, [Online; accessed 3 Feb. 2019].


[61] M. S. Elbamby, M. Bennis, and W. Saad, “Proactive edge computing in latency-constrained

fog networks,” in Proceedings of IEEE European Conference on Networks and Communica-

tions (EuCNC), pp. 1–6, 12-15 Jun. 2017 (Oulu, Finland).

[62] D. Lee, J. Choi, J.-H. Kim, S. H. Noh, S. L. Min, Y. Cho, and C. S. Kim, “LRFU: A spectrum of policies that subsumes the least recently used and least frequently used policies,” IEEE Transactions on Computers, vol. 50, no. 12, pp. 1352–1361, Dec. 2001.

[63] A. Ndikumana, S. Ullah, and C. S. Hong, “Scalable aggregation-based packet forwarding in

content centric networking,” in Proceedings of the 18th IEEE Asia-Pacific Network Operations

and Management Symposium (APNOMS), pp. 1–4, 5-7 Oct. 2016 (Kanazawa, Japan).

[64] A. Ndikumana, N. H. Tran, T. M. Ho, D. Niyato, Z. Han, and C. S. Hong, “Joint incentive mechanism for paid content caching and price based cache replacement policy in named data networking,” IEEE Access, vol. 6, pp. 33702–33717, 18 Jun. 2018.

[65] M. Hong, M. Razaviyayn, Z.-Q. Luo, and J.-S. Pang, “A unified algorithmic framework for

block-structured optimization involving big data: With applications in machine learning and

signal processing,” IEEE Signal Processing Magazine, vol. 33, no. 1, pp. 57–77, 25 Dec. 2015.

[66] M. Hong, T.-H. Chang, X. Wang, M. Razaviyayn, S. Ma, and Z.-Q. Luo, “A block successive

upper bound minimization method of multipliers for linearly constrained convex optimization,”

arXiv preprint arXiv:1401.7079, 28 Jan. 2014.

[67] A. Ndikumana, N. H. Tran, and C. S. Hong, “Deep learning based caching for self-driving

car in multi-access edge computing,” arXiv preprint arXiv:1810.01548, 3 Oct. 2018.

[68] M. Hong, X. Wang, M. Razaviyayn, and Z.-Q. Luo, “Iteration complexity analysis of block

coordinate descent methods,” Mathematical Programming, vol. 163, no. 1-2, pp. 85–114, May

2017.

[69] U. Feige, M. Feldman, and I. Talgam-Cohen, “Oblivious rounding and the integrality gap,”

in Proceedings of the Leibniz International Proceedings in Informatics, vol. 60. Schloss

Dagstuhl-Leibniz-Zentrum fuer Informatik, 13-16 Dec. 2016.


[70] N. Zhang, Y.-F. Liu, H. Farmanbar, T.-H. Chang, M. Hong, and Z.-Q. Luo, “Network slicing

for service-oriented networks under resource constraints,” IEEE Journal on Selected Areas in

Communications, vol. 35, no. 11, pp. 2512–2521, 05 Oct. 2017.

[71] D. K. Molzahn, F. Dorfler, H. Sandberg, S. H. Low, S. Chakrabarti, R. Baldick, and J. Lavaei,

“A survey of distributed optimization and control algorithms for electric power systems,” IEEE

Transactions on Smart Grid, vol. 8, no. 6, pp. 2941–2962, 25 Jul. 2017.

[72] M. Farivar, X. Zho, and L. Che, “Local voltage control in distribution systems: An incre-

mental control algorithm,” in Proceedings of International Conference on Smart Grid Commu-

nications (SmartGridComm). IEEE, pp. 732–737, 2-5 Nov. 2015 (Miami, FL, USA).

[73] G. Van Rossum et al., “Python programming language.” in Proceedings of USENIX Annual

Technical Conference, vol. 41, p. 36, 17-22 Jun. 2007 (Santa Clara, CA, USA).

[74] W. McKinney, “pandas: A foundational Python library for data analysis and statistics,” Python for High Performance and Scientific Computing, pp. 1–9, 2011.

[75] O. Boswarva et al., “Sitefinder mobile phone base station database,” Edinburgh DataShare,

Feb. 2017.

[76] T. M. Kodinariya and P. R. Makwana, “Review on determining number of cluster in k-means

clustering,” International Journal, vol. 1, no. 6, pp. 90–95, 2013.

[77] Y. Mao, J. Zhang, S. Song, and K. B. Letaief, “Stochastic joint radio and computational

resource management for multi-user mobile-edge computing systems,” IEEE Transactions on

Wireless Communications, vol. 16, no. 9, pp. 5994–6009, 23 Jun. 2017.

[78] X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multi-user computation offloading for mobile-

edge cloud computing,” IEEE/ACM Transactions on Networking, vol. 24, no. 5, pp. 2795–2808,

26 Oct. 2015.

[79] M. E. Newman, “Power laws, Pareto distributions and Zipf’s law,” Contemporary Physics, vol. 46, no. 5, pp. 323–351, 20 Feb. 2007.


[80] A. Ndikumana, K. Thar, T. M. Ho, N. H. Tran, P. L. Vo, D. Niyato, and C. S. Hong, “In-

network caching for paid contents in content centric networking,” in Proceedings of IEEE

Global Communications Conference (GLOBECOM), pp. 1–6, 4-8 Dec. 2017 (Singapore).

[81] A. Ndikumana, S. Ullah, K. Thar, N. H. Tran, B. J. Park, and C. S. Hong, “Novel cooperative and fully-distributed congestion control mechanism for content centric networking,” IEEE Access, vol. 5, pp. 27691–27706, 29 Nov. 2017.

[82] A. Ndikumana, S. Ullah, R. Kamal, K. Thar, H. S. Kang, S. I. Moon, and C. S. Hong,

“Network-assisted congestion control for information centric networking,” in Proceedings of

17th IEEE Asia-Pacific Network Operations and Management Symposium (APNOMS), pp.

464–467, 19-21 Aug. 2015 (Busan, South Korea).

[83] A. Themelis, L. Stella, and P. Patrinos, “Douglas-Rachford splitting and ADMM for non-convex optimization: New convergence results and accelerated versions,” arXiv preprint arXiv:1709.05747v2, 9 Jan. 2018.

[84] M. Daily, S. Medasani, R. Behringer, and M. Trivedi, “Self-driving cars,” Computer, vol. 50,

no. 12, pp. 18–23, 2017.

[85] BUSBUD, “Will driverless buses be a reality?” https://www.busbud.com/blog/

will-driverless-buses-reality/, [Online; accessed 22 Mar. 2019].

[86] Frost & Sullivan, “Global Autonomous Driving Market Outlook, 2018 (Frost & Sullivan Reports, March 2018),” https://info.microsoft.com/rs/157-GQE-382/images/K24A-2018%20Frost%20%26%20Sullivan%20-%20Global%20Autonomous%20Driving%20Outlook.pdf, [Online; accessed 22 Mar. 2019].

[87] G. Jarvis, “Keeping entertained in the autonomous vehicle,” TU-Automotive Detroit, 6-7

Jun. 2018.

[88] Hollywood Reporter, “Why Hollywood could make billions from self-

driving cars,” https://www.hollywoodreporter.com/behind-screen/

why-driving-cars-could-be-hollywoods-next-big-thing-1031554,

[Online; accessed 22 Mar. 2019].


[89] Next Analytics, “YouTube video appeal demographics,” https://www.

nextanalytics.com/excel-youtube-analytic-insights-and-data-mining/

page/4/, [Online; accessed 22 Mar. 2019].

[90] G. Dimitrakopoulos and G. Bravos, “Current Technologies in Vehicular Communication,” Springer, 2017.

[91] A. Azzouni and G. Pujolle, “NeuTM: A neural network-based framework for traffic matrix prediction in SDN,” in Proceedings of IEEE/IFIP Network Operations and Management Symposium (NOMS), pp. 1–5, 23-27 Apr. 2018 (Taipei, Taiwan).

[92] J. J. Whang, I. S. Dhillon, and D. F. Gleich, “Non-exhaustive, overlapping k-means,” in Proceedings of the 2015 SIAM International Conference on Data Mining. SIAM, pp. 936–944, 30 Apr.-2 May 2015 (British Columbia, Canada).

[93] J. Martineau, T. Finin, A. Joshi, and S. Patel, “Improving binary classification on text prob-

lems using differential word features,” in Proceedings of the 18th ACM conference on Informa-

tion and knowledge management, pp. 2019–2024, 02 - 06 Nov. 2009 (Hong Kong, China).

[94] Y. Sun, P. Babu, and D. P. Palomar, “Majorization-minimization algorithms in signal process-

ing, communications, and machine learning,” IEEE Transactions on Signal Processing, vol. 65,

no. 3, pp. 794–816, 18 Aug. 2017.

[95] Z. Chang, L. Lei, Z. Zhou, S. Mao, and T. Ristaniemi, “Learn to cache: Machine learning

for network edge caching in the big data era,” IEEE Wireless Communications, vol. 25, no. 3, 4

Jul. 2018.

[96] R. Devooght and H. Bersini, “Long and short-term recommendations with recurrent neural

networks,” in Proceedings of the 25th Conference on User Modeling, Adaptation and Person-

alization. ACM, pp. 13–21, 9-12 Jul. 2017 (FIIT STU, Bratislava, Slovakia).

[97] E. Erdem, “Predicting movie rating based on tags using machine learning and deep learning,” https://github.com/AdvRegProj/MovieLens-ML-LSTM/blob/master/Movie%20Rating%20Prediction%20using%20GloVe%20Word%20Embeddings%20and%20Deep%20Learning.ipynb, [Online; accessed 22 Mar. 2019].

[98] M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, M. Hasan, B. C. Van Esesn, A. A. S.

Awwal, and V. K. Asari, “The history began from alexnet: A comprehensive survey on deep

learning approaches,” arXiv preprint:1803.01164, 3 Mar. 2018.

[99] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image

recognition,” arXiv preprint:1409.1556, 4 Sep. 2014.

[100] G. Levi and T. Hassner, “Age and gender classification using convolutional neural net-

works,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition

Workshops, pp. 34–42, 7-12 Jun. 2015 (Boston, MA, USA).

[101] R. Rothe, R. Timofte, and L. V. Gool, “DEX: Deep expectation of apparent age from a single image,” in Proceedings of IEEE International Conference on Computer Vision Workshops (ICCVW), 7-13 Dec. 2015 (Santiago, Chile).

[102] V. Shaw and S. Dowlatkhah, “Network edge based access network discovery and selection,”

US Patent 9,629,076, 18 Apr. 2017.

[103] E. Ndashimye, N. I. Sarkar, and S. K. Ray, “A novel network selection mechanism for

vehicle-to-infrastructure communication,” in Proceedings of IEEE 14th Intl. Conf. on Pervasive

Intelligence and Computing, 2nd Intl. Conf. on Big Data Intelligence and Computing and Cyber

Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech), pp. 483–488, 8-12

Aug. 2016 (Auckland, New Zealand).

[104] 3GPP, “Access Network Discovery and Selection Function (ANDSF) Management Object (MO), 3GPP TS 24.312, Release 8,” Jan. 22, 2015. https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=1077, [Online; accessed 2 Apr. 2019].

[105] T. Wang, L. Song, and Z. Han, “Coalitional graph games for popular content distribution in

cognitive radio vanets,” IEEE Transactions on Vehicular Technology, vol. 62, no. 8, pp. 4010–

4019, 6 Feb. 2013.


[106] N. Cheng, N. Lu, N. Zhang, X. Zhang, X. S. Shen, and J. W. Mark, “Opportunistic WiFi offloading in vehicular environment: A game-theory approach,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 7, pp. 1944–1955, 26 Jan. 2016.

[107] K. Shah, A. Mitra, and D. Matani, “An O(1) algorithm for implementing the LFU cache eviction scheme,” Technical report, Citeseer, 2010.

[108] S. Mosleh, L. Liu, and J. Zhang, “Proportional-fair resource allocation for coordinated

multi-point transmission in lte-advanced,” IEEE Transactions on Wireless Communications,

vol. 15, no. 8, pp. 5355–5367, 21 Apr. 2016.

[109] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “A survey on mobile edge com-

puting: The communication perspective,” IEEE Communications Surveys & Tutorials, vol. 19,

no. 4, pp. 2322–2358, 25 Aug. 2017.

[110] Google, “Python client library for Google Maps API Web Services,” https://github.com/googlemaps/google-maps-services-python, [Online; accessed 12 Mar. 2019].

[111] Keras, “Keras: The Python Deep Learning library,” https://keras.io/, [Online; ac-

cessed 22 Mar. 2019].

[112] F. M. Harper and J. A. Konstan, “The movielens datasets: History and context,” ACM

transactions on interactive intelligent systems, vol. 5, no. 4, p. 19, 2016.

[113] L. Breslau, P. Cao, L. Fan, G. Phillips, S. Shenker et al., “Web caching and Zipf-like distributions: Evidence and implications,” in Proceedings of IEEE 18th Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 1, no. 1, pp. 126–134, 21-25 Mar. 1999 (New York, NY, USA).


Appendix A: List of Publications

International Journal Papers:

[1] Anselme Ndikumana, Nguyen H. Tran, Tai Manh Ho, Zhu Han, Walid Saad, Dusit Niyato, and Choong Seon Hong, “Joint communication, computation, caching, and control in big data multi-access edge computing,” IEEE Transactions on Mobile Computing, (SCI, IF: 4.098) (in press).

[2] Anselme Ndikumana, Nguyen H. Tran, Tai Manh Ho, Dusit Niyato, Zhu Han, and Choong Seon Hong, “Joint Incentive Mechanism for Paid Content Caching and Price Based Cache Replacement Policy in Named Data Networking,” IEEE Access, Vol. 6, pp. 33702-33717 (SCIE, IF: 4.098).

[3] Anselme Ndikumana, Saeed Ullah, Kyi Thar, Nguyen H. Tran, Bang Ju Park, Choong Seon

Hong, “Novel Cooperative and Fully-distributed Congestion Control Mechanism for Content

Centric Networking,” IEEE Access, Vol.5, pp. 27691-27706 (SCIE, IF: 4.098).

[4] Anselme Ndikumana, Saeed Ullah, Do Hyeon Kim, and Choong Seon Hong, “DeepAuC:

Joint Deep Learning and Auction for Congestion-Aware Caching in Named Data Network-

ing,” PloS One (SCIE, IF: 2.766) (Major revision).

[5] S. M. Ahsan Kazmi, Tri Nguyen Dang, Ibrar Yaqoob, Anselme Ndikumana, Ejaz Ahmed, Rasheed Hussain, and Choong Seon Hong, “Infotainment Enabled Smart Cars: A Joint Communication, Caching, and Computation Approach,” IEEE Transactions on Vehicular Technology (SCI, IF: 4.432) (Major revision).


International Conference Papers:

[1] Anselme Ndikumana and Choong Seon Hong, “Self-Driving Car Meets Multi-access Edge

Computing for Deep Learning-Based Caching,” The 33rd International Conference on Infor-

mation Networking (ICOIN 2019) January 9-11, 2019, Kuala Lumpur, Malaysia.

[2] Anselme Ndikumana, Kyi Thar, Tai Manh Ho, Nguyen H. Tran, Phuong L. Vo, Dusit Niyato,

Choong Seon Hong, “In-Network Caching for Paid Contents in Content Centric Networking,”

IEEE Global Communications Conference, pp. 1-6, 2017, Dec. 4-8, 2017, Singapore.

[3] Anselme Ndikumana, Saeed Ullah, Tuan LeAnh, Nguyen H. Tran, Choong Seon Hong,

“Collaborative Cache Allocation and Computation Offloading in Mobile Edge Computing,”

The 19th Asia-Pacific Network Operations and Management Symposium (APNOMS 2017),

Sep. 27-29, 2017, Seoul, Korea.

[4] Anselme Ndikumana and Choong Seon Hong, “ProCCN: Proximity-based Content Centric Networking,” The International Symposium on Perception, Action, and Cognitive Systems (PACS 2016), Oct. 27-28, 2016, Seoul, Korea.

[5] Anselme Ndikumana, Saeed Ullah, Choong Seon Hong, “Scalable Aggregation-based

Packet Forwarding in Content Centric Networking,” The 18th Asia-Pacific Network Oper-

ations and Management Symposium (APNOMS 2016), Oct. 5-7, 2016, Kanazawa, Japan.

[6] Anselme Ndikumana, Saeed Ullah, Rossi Kamal, Kyi Thar, Hyo Sung Kang, Seungil Moon,

Choong Seon Hong, “Network-Assisted Congestion Control for Information Centric Net-

working (ICN),” The 17th Asia-Pacific Network Operations and Management Symposium

(APNOMS 2015), 19-21 Aug. 2015, Busan, South Korea.

[7] Saeed Ullah, Tuan LeAnh, Anselme Ndikumana, Md. Golam Rabiul Alam, Choong Seon

Hong, “Layered Video Communication in ICN Enabled Cellular Network with D2D Com-

munication,” The 19th Asia-Pacific Network Operations and Management Symposium (AP-

NOMS 2017), Sep. 27-29, 2017, Seoul, Korea.