
Page 1: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Deep Learning for Natural Language Processing

Xipeng Qiu ([email protected])

http://nlp.fudan.edu.cn

Weibo:@邱锡鹏

Fudan University

Corpus and Empirical Linguistics Workshop

City University of Hong Kong

June 23, 2016

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 1 / 157 CityU, HK

Page 2: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Table of Contents

1 Basic Concepts: Artificial Intelligence, Machine Learning, Deep Learning, Implementation

2 Neural Models for Representation Learning: General Architecture, Convolutional Neural Network, Recurrent Neural Network, Recursive Neural Network, Attention Model

3 Applications: Text Generation, Question Answering, Machine Translation, Text Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 2 / 157 CityU, HK

Page 3: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

Table of Contents

1 Basic Concepts: Artificial Intelligence, Machine Learning, Deep Learning, Implementation

2 Neural Models for Representation Learning: General Architecture, Convolutional Neural Network, Recurrent Neural Network, Recursive Neural Network, Attention Model

3 Applications: Text Generation, Question Answering, Machine Translation, Text Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 3 / 157 CityU, HK

Page 4: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

Begin with "AI"

Human: Memory, Computation

Computer: Learning, Thinking, Creativity

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 4 / 157 CityU, HK

Page 5: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

Turing Test

"Computing Machinery and Intelligence", Alan Turing, 1950.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 5 / 157 CityU, HK

Page 6: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

Artificial Intelligence

Definition from Wikipedia

Artificial intelligence (AI) is the intelligence exhibited by machines. Colloquially, the term "artificial intelligence" is likely to be applied when a machine uses cutting-edge techniques to competently perform or mimic "cognitive" functions that we intuitively associate with human minds, such as "learning" and "problem solving".

Research Topics

Knowledge Representation
Machine Learning
Natural Language Processing
Computer Vision
· · ·

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 6 / 157 CityU, HK

Page 7: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

Natural Language Processing (NLP)

What is NLP?

Natural language processing turns to engineering to solve problems that need to analyze (or generate) natural language text.

Data driven
Accuracy driven
More engineered

vs. Computational Linguistics

Computational methods to answer the scientific questions of linguistics.
More theoretical

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 7 / 157 CityU, HK

Page 8: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

Challenge: Semantic Gap

Text in Human

床前明月光,疑是地上霜。举头望明月,低头思故乡。

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 8 / 157 CityU, HK

Page 9: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

Challenge: Semantic Gap

Text in Computer

111001011011101010001010111001011000100110001101111001101001100010001

110111001101001110010001000111001011000010110001001111011111011110010001100

111001111001011010010001111001101001100010101111111001011001110010110

000111001001011100010001010111010011001110010011100111000111000000010000010

0010000000100000001000000000101011100100101110001011111011100101101001

00101101001110011010011100100110111110011010011000100011101110011010011100

1000100011101111101111001000110000001010111001001011110110001110111001

0110100100101101001110011010000000100111011110011010010101100001011110010010

11100110100001111000111000000010000010

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 9 / 157 CityU, HK

Page 10: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

Challenge: Semantic Gap

Figure: Guernica (Picasso)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 10 / 157 CityU, HK

Page 11: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

The ideal pipeline of NLP

Word Segmentation

POS Tagging

Syntactic Parsing

Semantic Parsing

Knowledge

Applications:
Question Answering
Machine Translation
Sentiment Analysis

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 11 / 157 CityU, HK

Page 12: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

But in practice: End-to-End

Figure: an end-to-end model maps an input sentence (e.g., "I like this movie." or "I dislike this movie.") directly to an output; the surrounding steps are model selection, feature extraction, parameter learning, and decoding/inference.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 12 / 157 CityU, HK

Page 13: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

Feature Extraction

Bag-of-Word

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 13 / 157 CityU, HK

Page 14: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

Example: Sentiment Classification

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 14 / 157 CityU, HK

Page 15: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

Example: Chinese Word Segmentation

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 15 / 157 CityU, HK

Page 16: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Artificial Intelligence

Example: Chinese Word Segmentation

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 16 / 157 CityU, HK

Page 17: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Table of Contents

1 Basic Concepts: Artificial Intelligence, Machine Learning, Deep Learning, Implementation

2 Neural Models for Representation Learning: General Architecture, Convolutional Neural Network, Recurrent Neural Network, Recursive Neural Network, Attention Model

3 Applications: Text Generation, Question Answering, Machine Translation, Text Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 17 / 157 CityU, HK

Page 18: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Machine Learning

Definition from Wikipedia

Machine learning explores the study and construction of algorithms that can learn from and make predictions on data.

Model: input x → output y
Learning algorithm: training data (x, y)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 18 / 157 CityU, HK

Page 19: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Formal Specification of Machine Learning

Input data: (x_i, y_i), 1 ≤ i ≤ m

Model:
Linear model: y = f(x) = w^T x + b
Generalized linear model: y = f(x) = w^T φ(x) + b
Non-linear model: neural network

Criterion:
Loss function: L(y, f(x)) → optimization
Empirical risk: Q(θ) = (1/m) ∑_{i=1}^{m} L(y_i, f(x_i, θ)) → minimization
Regularization: ‖θ‖²
Objective function: Q(θ) + λ‖θ‖²

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 19 / 157 CityU, HK

Page 20: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Loss Function

Given a test sample (x, y), the predicted label is f(x, θ).

0-1 Loss:

L(y, f(x, θ)) = 0 if y = f(x, θ), 1 if y ≠ f(x, θ)   (1)
             = I(y ≠ f(x, θ)),                      (2)

where I(·) is the indicator function.

Quadratic Loss:

L(y, f(x, θ)) = (y − f(x, θ))²   (3)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 20 / 157 CityU, HK

Page 21: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Loss Function

Cross-entropy Loss: we regard f_i(x, θ) as the conditional probability of class i,

f_i(x, θ) ∈ [0, 1],   ∑_{i=1}^{C} f_i(x, θ) = 1.   (4)

Then f_y(x, θ) is the likelihood of the gold class y, and the negative log-likelihood is

L(y, f(x, θ)) = − log f_y(x, θ).   (5)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 21 / 157 CityU, HK

Page 22: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Loss Function

We use a one-hot vector y to represent class c, in which y_c = 1 and the other elements are 0. The negative log-likelihood can then be rewritten as

L(y, f(x, θ)) = − ∑_{i=1}^{C} y_i log f_i(x, θ),   (6)

where y is the distribution of the gold labels; thus Eq. 6 is the cross-entropy loss.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 22 / 157 CityU, HK

Page 23: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Loss Function

Hinge Loss: for binary classification, y and f(x, θ) are in {−1, +1}. The hinge loss is

L(y, f(x, θ)) = max(0, 1 − y f(x, θ))   (7)
             = |1 − y f(x, θ)|₊.       (8)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 23 / 157 CityU, HK
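The four losses above are easy to check numerically. A minimal NumPy sketch (the function names and sample values are illustrative, not from the slides):

import numpy as np

def zero_one_loss(y, y_pred):
    # Eqs. 1-2: 1 if the predicted label differs from the gold label, else 0
    return float(y != y_pred)

def quadratic_loss(y, f_x):
    # Eq. 3: squared error between the gold value and the model output
    return (y - f_x) ** 2

def cross_entropy_loss(y, probs):
    # Eq. 5: negative log-likelihood of the gold class index y,
    # where probs is the predicted class distribution f(x, theta)
    return -np.log(probs[y])

def hinge_loss(y, f_x):
    # Eq. 7: y in {-1, +1}, f_x a real-valued score
    return max(0.0, 1.0 - y * f_x)

print(zero_one_loss(1, 0))                                # 1.0
print(quadratic_loss(1.0, 0.8))                           # ~0.04
print(cross_entropy_loss(2, np.array([0.1, 0.2, 0.7])))   # ~0.357
print(hinge_loss(+1, 0.3))                                # 0.7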

Page 24: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Loss Function

For binary classification, y and f(x, θ) are in {−1, +1}; let z = y f(x, θ).

http://www.cs.cmu.edu/~yandongl/loss.html

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 24 / 157 CityU, HK

Page 25: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Parameter Learning

In ML, our objective is to learn the parameter θ that minimizes the loss function:

θ* = argmin_θ R(θ)                                      (9)
   = argmin_θ (1/N) ∑_{i=1}^{N} L(y^(i), f(x^(i), θ)).  (10)

How to learn θ?

Gradient Descent!

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 25 / 157 CityU, HK


Page 27: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Gradient Descent

Gradient Descent:

θ_{t+1} = θ_t − λ ∂R(θ_t)/∂θ                                (11)
        = θ_t − λ ∑_{i=1}^{N} ∂R(θ_t; x^(i), y^(i))/∂θ,     (12)

where λ is also called the learning rate in ML.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 26 / 157 CityU, HK

Page 28: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Gradient Descent

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 27 / 157 CityU, HK

Page 29: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Gradient Descent

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 28 / 157 CityU, HK

Page 30: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Stochastic Gradient Descent (SGD)

It is difficult to handle large-scale training data with (batch) gradient descent.

We can instead use one (randomly picked) example in each iteration, which is called Stochastic Gradient Descent:

θ_{t+1} = θ_t − λ ∂R(θ_t; x^(t), y^(t))/∂θ.   (13)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 29 / 157 CityU, HK


Page 32: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Mini-batch Stochastic Gradient Descent

A tradeoff between batch and stochastic gradient descent: mini-batch SGD.

We use m (1 ≤ m ≤ N) examples in each iteration:

θ_{t+1} = θ_t − λ (1/m) ∑_{i∈I_t} ∂R(θ_t; x^(i), y^(i))/∂θ.   (14)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 30 / 157 CityU, HK
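A minimal sketch of the mini-batch update in Eq. 14 (NumPy; grad_fn and the hyper-parameter values are illustrative assumptions, not from the slides):

import numpy as np

def minibatch_sgd(grad_fn, theta, X, Y, lr=0.1, batch_size=32, epochs=5):
    # grad_fn(theta, X_batch, Y_batch) must return the gradient of the
    # average loss over the batch, i.e. (1/m) * sum of per-example gradients.
    n = len(X)
    for _ in range(epochs):
        perm = np.random.permutation(n)        # shuffle (one of the SGD tricks)
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            theta = theta - lr * grad_fn(theta, X[idx], Y[idx])   # Eq. 14
    return theta

Batch gradient descent and pure SGD are the two extremes: batch_size = N and batch_size = 1 respectively.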


Page 34: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Two Tricks of SGD

Early-Stop
Shuffle

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 31 / 157 CityU, HK

Page 35: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Linear Classification

For binary classification y ∈ {0, 1}, the classifier is

y = 1 if w^T x > 0, 0 if w^T x ≤ 0
  = I(w^T x > 0).   (15)

Figure: Binary Linear Classification (the decision boundary w^T x = 0 in the (x₁, x₂) plane, with normal vector w)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 32 / 157 CityU, HK

Page 36: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Logistic Regression

How to learn the parameter w: perceptron, logistic regression, etc.

The posterior probability of y = 1 is

P(y = 1|x) = σ(w^T x) = 1 / (1 + exp(−w^T x)),   (16)

where σ(·) is the logistic function. The posterior probability of y = 0 is P(y = 0|x) = 1 − P(y = 1|x).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 33 / 157 CityU, HK

Page 37: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Logistic Regression

Given N samples (x^(i), y^(i)), 1 ≤ i ≤ N, we use the cross-entropy loss function

J(w) = − ∑_{i=1}^{N} ( y^(i) log σ(w^T x^(i)) + (1 − y^(i)) log(1 − σ(w^T x^(i))) ).   (17)

The gradient of J(w) is

∂J(w)/∂w = ∑_{i=1}^{N} x^(i) ( σ(w^T x^(i)) − y^(i) ).   (18)

Initialize w₀ = 0, and update

w_{t+1} = w_t − λ ∂J(w)/∂w.   (19)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 34 / 157 CityU, HK
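Putting Eqs. 16-19 together gives a short training loop. This is a sketch under the assumption that the gradient is averaged over the N samples (the slide sums it); the names and hyper-parameters are illustrative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_regression_gd(X, y, lr=0.1, epochs=100):
    # X: (N, d) design matrix, y: (N,) labels in {0, 1}
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)              # P(y = 1 | x) for every sample, Eq. 16
        grad = X.T @ (p - y) / len(y)   # averaged gradient of Eq. 17 (cf. Eq. 18)
        w = w - lr * grad               # Eq. 19
    return w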

Page 38: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Multiclass Classification

Generally, y ∈ {1, · · · , C}, and we define C discriminant functions

f_c(x) = w_c^T x,   c = 1, · · · , C,   (20)

where w_c is the weight vector of class c. Thus,

y = argmax_{c=1}^{C} w_c^T x.   (21)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 35 / 157 CityU, HK

Page 39: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Softmax Regression

Softmax regression is a generalization of logistic regression to multi-class classification problems. With softmax, the posterior probability of y = c is

P(y = c|x) = softmax(w_c^T x) = exp(w_c^T x) / ∑_{i=1}^{C} exp(w_i^T x).   (22)

Class c is represented by the one-hot vector

y = [I(1 = c), I(2 = c), · · · , I(C = c)]^T,   (23)

where I(·) is the indicator function.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 36 / 157 CityU, HK

Page 40: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Softmax Regression

Redefining Eq. 22 in vector form,

ŷ = softmax(W^T x)
  = exp(W^T x) / (1^T exp(W^T x))
  = exp(z) / (1^T exp(z)),   (24)

where W = [w₁, · · · , w_C], ŷ is the vector of predicted posterior probabilities, and z = W^T x is the input of the softmax function.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 37 / 157 CityU, HK

Page 41: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Machine Learning

Softmax Regression

Given the training set (x^(i), y^(i)), 1 ≤ i ≤ N, the cross-entropy loss is

J(W) = − ∑_{i=1}^{N} ∑_{c=1}^{C} y_c^(i) log ŷ_c^(i) = − ∑_{i=1}^{N} (y^(i))^T log ŷ^(i).

The gradient of J(W) is

∂J(W)/∂w_c = − ∑_{i=1}^{N} x^(i) ( y^(i) − ŷ^(i) )_c.   (25)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 38 / 157 CityU, HK
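A sketch of softmax regression trained with the gradient of Eq. 25 (NumPy; averaging over N and the hyper-parameters are illustrative choices, not from the slides):

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def softmax_regression_gd(X, Y, lr=0.1, epochs=100):
    # X: (N, d) inputs; Y: (N, C) one-hot gold labels (Eq. 23)
    N, d = X.shape
    C = Y.shape[1]
    W = np.zeros((d, C))
    for _ in range(epochs):
        Y_hat = softmax(X @ W)          # predicted posteriors, Eq. 24
        grad = -X.T @ (Y - Y_hat) / N   # gradient of the cross-entropy loss, Eq. 25
        W = W - lr * grad
    return W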

Page 42: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Table of Contents

1 Basic Concepts: Artificial Intelligence, Machine Learning, Deep Learning, Implementation

2 Neural Models for Representation Learning: General Architecture, Convolutional Neural Network, Recurrent Neural Network, Recursive Neural Network, Attention Model

3 Applications: Text Generation, Question Answering, Machine Translation, Text Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 39 / 157 CityU, HK

Page 43: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Artificial Neural Network

Artificial neural networks1 (ANNs) are a family of models inspired by biological neural networks (the central nervous systems of animals, in particular the brain). Artificial neural networks are generally presented as systems of interconnected "neurons" which exchange messages between each other.

1 https://en.wikipedia.org/wiki/Artificial_neural_network

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 40 / 157 CityU, HK

Page 44: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Biological Neuron

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 41 / 157 CityU, HK

Page 45: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Artificial Neuron

An artificial neuron is a mathematical function that simulates a biological neuron.

Input: x = (x₁, x₂, · · · , x_n)
State: z
Output: a

z = w^T x + b   (26)
a = f(z)        (27)

Figure: an artificial neuron; the inputs x₁, · · · , x_n with weights w₁, · · · , w_n and a bias b are summed into z and passed through the activation σ

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 42 / 157 CityU, HK

Page 46: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Activation Functions

Sigmoid functions:

σ(x) = 1 / (1 + e^{−x})                    (28)

tanh(x) = (e^x − e^{−x}) / (e^x + e^{−x})  (29)

tanh(x) = 2σ(2x) − 1

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 43 / 157 CityU, HK

Page 47: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Activation Functions

The rectifier function2, also called the rectified linear unit (ReLU)3:

rectifier(x) = max(0, x)   (30)

The softplus function4:

softplus(x) = log(1 + e^x)   (31)

2 X. Glorot, A. Bordes, and Y. Bengio. "Deep sparse rectifier neural networks". In: International Conference on Artificial Intelligence and Statistics. 2011, pp. 315-323.
3 V. Nair and G. E. Hinton. "Rectified linear units improve restricted Boltzmann machines". In: Proceedings of the 27th International Conference on Machine Learning (ICML-10). 2010, pp. 807-814.
4 C. Dugas et al. "Incorporating second-order functional knowledge for better option pricing". In: Advances in Neural Information Processing Systems (2001).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 44 / 157 CityU, HK
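The four activations are one-liners in NumPy; this sketch mirrors Eqs. 28-31 (the sample points are arbitrary):

import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))      # Eq. 28

def tanh(x):
    return np.tanh(x)                    # Eq. 29; equals 2 * logistic(2 * x) - 1

def rectifier(x):                        # ReLU, Eq. 30
    return np.maximum(0.0, x)

def softplus(x):                         # Eq. 31
    return np.log1p(np.exp(x))

x = np.linspace(-4.0, 6.0, 6)
print(logistic(x), tanh(x), rectifier(x), softplus(x), sep="\n")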

Page 48: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Activation Functions

Figure: plots of (a) the logistic function, (b) tanh, (c) the rectifier, and (d) softplus

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 45 / 157 CityU, HK

Page 49: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Types of Artificial Neural Network5

Feedforward neural network, also called Multilayer Perceptron (MLP).
Recurrent neural network.

5 https://en.wikipedia.org/wiki/Types_of_artificial_neural_networks

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 46 / 157 CityU, HK

Page 50: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Basic Concepts of Deep Learning

Model: artificial neural networks that consist of multiple hidden non-linear layers.
Function: the non-linear map y = σ(∑_i w_i x_i + b).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 47 / 157 CityU, HK

Page 51: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Feedforward Neural Network

In a feedforward neural network, the information moves in only one direction, forward: from the input nodes, data goes through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.

Figure: a feedforward network with inputs x₁, · · · , x₄, two hidden layers, and output y

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 48 / 157 CityU, HK

Page 52: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Feedforward Computing

Definitions:

L: number of layers;
n_l: number of neurons in the l-th layer;
f_l(·): activation function of the l-th layer;
W^(l) ∈ R^{n_l × n_{l−1}}: weight matrix between the (l−1)-th and l-th layers;
b^(l) ∈ R^{n_l}: bias vector between the (l−1)-th and l-th layers;
z^(l) ∈ R^{n_l}: state vector of the neurons in the l-th layer;
a^(l) ∈ R^{n_l}: activation vector of the neurons in the l-th layer.

z^(l) = W^(l) · a^(l−1) + b^(l)   (32)
a^(l) = f_l(z^(l))                (33)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 49 / 157 CityU, HK

Page 53: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Feedforward Computing

z^(l) = W^(l) · f_{l−1}(z^(l−1)) + b^(l)   (34)

Thus,

x = a^(0) → z^(1) → a^(1) → z^(2) → · · · → a^(L−1) → z^(L) → a^(L) = y   (35)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 50 / 157 CityU, HK
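The whole forward pass of Eq. 35 is a short loop. A sketch assuming per-layer weights, biases, and activation functions are given (all names are illustrative):

import numpy as np

def feedforward(x, weights, biases, activations):
    # weights[l], biases[l], activations[l] describe layer l+1 of the network
    a = x                      # a(0) = x
    for W, b, f in zip(weights, biases, activations):
        z = W @ a + b          # Eq. 32: z(l) = W(l) a(l-1) + b(l)
        a = f(z)               # Eq. 33: a(l) = f_l(z(l))
    return a                   # a(L) = y

Usage: feedforward(x, [W1, W2], [b1, b2], [np.tanh, np.tanh]) computes a two-layer network.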

Page 54: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Feedforward Computing

5Image Source: https://www.quora.com/What-is-the-best-visual-explanation-for-the-back-propagation-algorithm-for-neural-networks

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 51 / 157 CityU, HK

Page 55: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Combining feedforward network and Machine Learning

Given training samples (x^(i), y^(i)), 1 ≤ i ≤ N, and a feedforward network f(x|W, b), the objective function is

J(W, b) = ∑_{i=1}^{N} L(y^(i), f(x^(i)|W, b)) + (1/2) λ ‖W‖²_F   (36)
        = ∑_{i=1}^{N} J(W, b; x^(i), y^(i)) + (1/2) λ ‖W‖²_F,    (37)

where ‖W‖²_F = ∑_{l=1}^{L} ∑_{i=1}^{n_{l+1}} ∑_{j=1}^{n_l} (W_{ij}^(l))².

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 52 / 157 CityU, HK

Page 56: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Learning by GD

W^(l) = W^(l) − α ∂J(W, b)/∂W^(l)                                         (38)
      = W^(l) − α ∑_{i=1}^{N} ( ∂J(W, b; x^(i), y^(i))/∂W^(l) ) − λW^(l), (39)

b^(l) = b^(l) − α ∂J(W, b)/∂b^(l)                                         (40)
      = b^(l) − α ∑_{i=1}^{N} ( ∂J(W, b; x^(i), y^(i))/∂b^(l) ).          (41)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 53 / 157 CityU, HK

Page 57: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Backpropagation

How do we compute ∂J(W, b; x, y)/∂W^(l)?

∂J(W, b; x, y)/∂W_{ij}^(l) = ( ∂J(W, b; x, y)/∂z^(l) )^T · ∂z^(l)/∂W_{ij}^(l).   (43)

We define δ^(l) = ∂J(W, b; x, y)/∂z^(l) ∈ R^{n_l}.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 54 / 157 CityU, HK

Page 58: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Backpropagation

Because z^(l) = W^(l) · a^(l−1) + b^(l),

∂z^(l)/∂W_{ij}^(l) = ∂(W^(l) · a^(l−1) + b^(l))/∂W_{ij}^(l) = [0, · · · , a_j^(l−1), · · · , 0]^T,   (44)

with a_j^(l−1) in the i-th row. Therefore,

∂J(W, b; x, y)/∂W_{ij}^(l) = δ_i^(l) a_j^(l−1).   (45)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 55 / 157 CityU, HK

Page 59: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Backpropagation

∂J(W, b; x, y)/∂W^(l) = δ^(l) (a^(l−1))^T.   (47)

In the same way,

∂J(W, b; x, y)/∂b^(l) = δ^(l).   (48)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 56 / 157 CityU, HK

Page 60: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

How do we compute δ^(l)?

δ^(l) ≜ ∂J(W, b; x, y)/∂z^(l)                                       (49)
      = ∂a^(l)/∂z^(l) · ∂z^(l+1)/∂a^(l) · ∂J(W, b; x, y)/∂z^(l+1)   (50)
      = diag(f_l′(z^(l))) · (W^(l+1))^T · δ^(l+1)                   (51)
      = f_l′(z^(l)) ⊙ ((W^(l+1))^T δ^(l+1)).                        (52)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 57 / 157 CityU, HK

Page 61: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Backpropagation Error

5Image Source: https://www.quora.com/What-is-the-best-visual-explanation-for-the-back-propagation-algorithm-for-neural-networks

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 58 / 157 CityU, HK

Page 62: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Backpropagation Algorithm

Input: training set (x^(i), y^(i)), i = 1, · · · , N; number of iterations T
Output: W, b

1  Initialize W, b;
2  for t = 1 · · · T do
3    for i = 1 · · · N do
4      (1) Feedforward computing;
5      (2) Compute δ^(l) by Eq. 52;
6      (3) Compute the gradients of the parameters by Eqs. 47 and 48:
7          ∂J(W, b; x^(i), y^(i))/∂W^(l) = δ^(l) (a^(l−1))^T;
8          ∂J(W, b; x^(i), y^(i))/∂b^(l) = δ^(l);
9      (4) Update the parameters:
10         W^(l) = W^(l) − α ∑_{i=1}^{N} ( ∂J(W, b; x^(i), y^(i))/∂W^(l) ) − λW^(l);
11         b^(l) = b^(l) − α ∑_{i=1}^{N} ( ∂J(W, b; x^(i), y^(i))/∂b^(l) );
12   end
13 end

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 59 / 157 CityU, HK
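The algorithm above is only a few lines of NumPy for a concrete network. The sketch below assumes tanh activations in every layer and a quadratic loss ½‖a(L) − y‖² (the slides do not fix these choices), and performs one per-example update:

import numpy as np

def backprop_step(x, y, Ws, bs, lr=0.1):
    # (1) feedforward computing, caching z(l) and a(l)
    a, As, Zs = x, [x], []
    for W, b in zip(Ws, bs):
        z = W @ a + b
        a = np.tanh(z)
        Zs.append(z)
        As.append(a)
    # (2) output error, then propagate delta backwards (Eq. 52)
    delta = (a - y) * (1.0 - np.tanh(Zs[-1]) ** 2)
    for l in range(len(Ws) - 1, -1, -1):
        gW = np.outer(delta, As[l])   # Eq. 47: delta(l) (a(l-1))^T
        gb = delta                    # Eq. 48
        if l > 0:
            delta = (1.0 - np.tanh(Zs[l - 1]) ** 2) * (Ws[l].T @ delta)
        # (4) update the parameters of layer l
        Ws[l] -= lr * gW
        bs[l] -= lr * gb
    return Ws, bs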

Page 63: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Deep Learning

Gradient Vanishing

δ^(l) = f_l′(z^(l)) ⊙ ((W^(l+1))^T δ^(l+1)).   (53)

When we use a sigmoid-type function, such as the logistic σ(x) or tanh,

σ′(x) = σ(x)(1 − σ(x)) ∈ [0, 0.25]    (54)
tanh′(x) = 1 − (tanh(x))² ∈ [0, 1].   (55)

Figure: gradients of (e) the logistic function and (f) tanh

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 60 / 157 CityU, HK

Page 64: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Implementation

Table of Contents

1 Basic Concepts: Artificial Intelligence, Machine Learning, Deep Learning, Implementation

2 Neural Models for Representation Learning: General Architecture, Convolutional Neural Network, Recurrent Neural Network, Recursive Neural Network, Attention Model

3 Applications: Text Generation, Question Answering, Machine Translation, Text Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 61 / 157 CityU, HK

Page 65: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Implementation

How to Implement DL?

1 Design the topological structure;
2 Compute the gradients by hand;
3 Write code from scratch;
4 Deal with the data.

It's too inefficient! Use tools with automatic differentiation instead: gradient-based machine learning algorithms benefit from automatic differentiation capabilities.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 62 / 157 CityU, HK


Page 67: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Implementation

What I actually do.

5Image Source: http://www.marekrei.com/blog/theano-tutorial/

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 63 / 157 CityU, HK

Page 68: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Implementation

Popular Deep Learning Libraries for NLP

Theano

Theano1 is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. It can use GPUs and perform efficient symbolic differentiation.

TensorFlow

TensorFlow2 is another flexible architecture, originally developed by the Google Brain team. TensorFlow allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.

1 http://www.deeplearning.net/software/theano/
2 https://www.tensorflow.org/

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 64 / 157 CityU, HK

Page 69: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Implementation

Popular Deep Learning Libraries for NLP

Keras (Recommended for non-programmers)

Keras1 is a minimalist, highly modular neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It:

allows for easy and fast prototyping (through total modularity, minimalism, and extensibility);
supports both convolutional networks and recurrent networks, as well as combinations of the two;
supports arbitrary connectivity schemes (including multi-input and multi-output training);
runs seamlessly on CPU and GPU.

1 http://keras.io/

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 65 / 157 CityU, HK

Page 70: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Implementation

Getting started: 30 seconds to Keras

from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import SGD

model = Sequential()
model.add(Dense(output_dim=64, input_dim=100))
model.add(Activation("relu"))
model.add(Dense(output_dim=10))
model.add(Activation("softmax"))

model.compile(loss='categorical_crossentropy',
              optimizer='sgd', metrics=['accuracy'])

model.fit(X_train, Y_train, nb_epoch=5, batch_size=32)
loss = model.evaluate(X_test, Y_test, batch_size=32)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 66 / 157 CityU, HK

Page 71: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Implementation

Difficulties

Huge numbers of parameters
Non-convex optimization
Gradient vanishing
Poor interpretability

Requirements

High computation
Big data
Good algorithms

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 67 / 157 CityU, HK

Page 72: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Basic Concepts Implementation

Tricks and Skills6 7

Use ReLU non-linearities
Use cross-entropy loss for classification
SGD + mini-batch
Shuffle the training samples (← very important)
Early stopping
Normalize the input variables (zero mean, unit variance)
Schedule to decrease the learning rate
Use a bit of L1 or L2 regularization on the weights (or a combination)
Use "dropout" for regularization
Data augmentation

6 G. B. Orr and K.-R. Müller. Neural Networks: Tricks of the Trade. Springer, 2003.
7 Geoff Hinton, Yoshua Bengio & Yann LeCun, Deep Learning, NIPS 2015 Tutorial.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 68 / 157 CityU, HK

Page 73: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

Table of Contents

1 Basic Concepts: Artificial Intelligence, Machine Learning, Deep Learning, Implementation

2 Neural Models for Representation Learning: General Architecture, Convolutional Neural Network, Recurrent Neural Network, Recursive Neural Network, Attention Model

3 Applications: Text Generation, Question Answering, Machine Translation, Text Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 69 / 157 CityU, HK

Page 74: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

How do we use neural networks for NLP tasks?

How do we encode or learn the representations of a word, phrase, sentence, paragraph, or even document?

Distributed Representation

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 70 / 157 CityU, HK


Page 76: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

Representation of Text

Local Representation

One-hot representation (word)
N-gram (phrase)
Bag-of-words (document)

Non-local Representation

Distributional representation (co-occurrence matrix, Latent Semantic Analysis)
Latent Dirichlet Allocation
Distributed representation

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 71 / 157 CityU, HK

Page 77: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

Representation of Text

Local Representation

A = [1  0  0  0]
B = [0  1  0  0]
C = [0  0  1  0]
D = [0  0  0  1]

Distributed Representation

A = [ 0.5   0.3  0.0   0.5]
B = [-0.1  -0.2  0.0   0.1]
C = [-0.1   0.5  0.0  -0.5]
D = [ 0.0   0.0  0.5   0.1]

Advantages: computable, condensed

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 72 / 157 CityU, HK

Page 78: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

General Neural Architectures for NLP8

1 represent the words/features with dense vectors (embeddings) via a lookup table;
2 concatenate the vectors;
3 apply multi-layer neural networks.

Used for classification, matching, ranking.

8 R. Collobert et al. "Natural language processing (almost) from scratch". In: The Journal of Machine Learning Research (2011).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 73 / 157 CityU, HK

Page 79: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

Difference from the traditional methods

              Traditional methods                   Neural methods
Features      Discrete vectors (one-hot repr.),     Dense vectors (distributed repr.),
              high-dimensional                      low-dimensional
Classifier    Linear                                Non-linear

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 74 / 157 CityU, HK

Page 80: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

Representation Learning for NLP

Word Level

NNLM
C&W
CBOW & Skip-Gram

Sentence Level

NBOW
Sequence models: recurrent NN (LSTM/GRU), Paragraph Vector
Topological models: recursive NN
Convolutional models: convolutional NN

Document Level

NBOW
Hierarchical models: two-level CNN
Sequence models: LSTM, Paragraph Vector

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 75 / 157 CityU, HK

Page 81: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

Let's start with Language Model

A statistical language model is a probability distribution over sequences of words. A sequence W consists of L words:

P(W) = P(w_{1:L}) = P(w₁, · · · , w_L)
     = P(w₁) P(w₂|w₁) P(w₃|w₁w₂) · · · P(w_L|w_{1:(L−1)})
     = ∏_{i=1}^{L} P(w_i | w_{1:(i−1)}).   (56)

n-gram model:

P(W) = ∏_{i=1}^{L} P(w_i | w_{(i−n+1):(i−1)}).   (57)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 76 / 157 CityU, HK
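A toy maximum-likelihood bigram model (n = 2 in Eq. 57); the tiny corpus and the smoothing-free estimate are purely illustrative:

from collections import defaultdict

def train_bigram_lm(corpus):
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for sent in corpus:
        words = ["<s>"] + sent + ["</s>"]
        for prev, cur in zip(words, words[1:]):
            counts[prev][cur] += 1     # count of the bigram (prev, cur)
            totals[prev] += 1          # count of the history prev
    # P(cur | prev) by maximum likelihood
    return lambda prev, cur: counts[prev][cur] / totals[prev] if totals[prev] else 0.0

p = train_bigram_lm([["I", "like", "this", "movie"], ["I", "dislike", "this", "movie"]])
print(p("I", "like"))   # 0.5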

Page 82: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

Neural Probabilistic Language Model9

turn unsupervised learning into supervised learning;
avoid the data sparsity of the n-gram model;
project each word into a low-dimensional space.

9 Y. Bengio, R. Ducharme, and P. Vincent. "A neural probabilistic language model". In: Journal of Machine Learning Research (2003).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 77 / 157 CityU, HK

Page 83: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

Problem of Very Large Vocabulary

Softmax output:

P^h_θ(w) = exp(s_θ(w, h)) / ∑_{w′} exp(s_θ(w′, h)).   (58)

Unfortunately, both evaluating P^h_θ and computing the corresponding likelihood gradient require normalizing over the entire vocabulary.

Hierarchical Softmax: a tree-structured vocabulary10
Negative Sampling11, noise-contrastive estimation (NCE)12

10 A. Mnih and G. Hinton. "A scalable hierarchical distributed language model". In: Advances in Neural Information Processing Systems (2009); F. Morin and Y. Bengio. "Hierarchical probabilistic neural network language model". In: AISTATS. Vol. 5. 2005, pp. 246-252.
11 T. Mikolov et al. "Efficient estimation of word representations in vector space". In: arXiv preprint arXiv:1301.3781 (2013).
12 A. Mnih and K. Kavukcuoglu. "Learning word embeddings efficiently with noise-contrastive estimation". In: Advances in Neural Information Processing Systems. 2013, pp. 2265-2273.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 78 / 157 CityU, HK

Page 84: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

Linguistic Regularities of Word Embeddings13

13 T. Mikolov, W.-t. Yih, and G. Zweig. "Linguistic regularities in continuous space word representations". In: HLT-NAACL. 2013, pp. 746-751.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 79 / 157 CityU, HK

Page 85: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

Skip-Gram Model14

14 Mikolov et al., "Efficient estimation of word representations in vector space".

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 80 / 157 CityU, HK

Page 86: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

Skip-Gram Model

Given a pair of words (w, c), the probability that the word c is observed in the context of the target word w is given by

Pr(D = 1|w, c) = 1 / (1 + exp(−w^T c)),

where w and c are the embedding vectors of w and c respectively. The probability of not observing word c in the context of w is given by

Pr(D = 0|w, c) = 1 − 1 / (1 + exp(−w^T c)).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 81 / 157 CityU, HK

Page 87: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

Skip-Gram Model with Negative Sampling

Given a training set D, the word embeddings are learned by maximizing the following objective function:

J(θ) = ∑_{(w,c)∈D} Pr(D = 1|w, c) + ∑_{(w,c)∈D′} Pr(D = 0|w, c),

where the set D′ contains randomly sampled negative examples, assumed to be all incorrect.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 82 / 157 CityU, HK
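The objective on the slide is straightforward to write down for a fixed set of positive and negative pairs. A sketch (NumPy; W and C are assumed lookup tables of target and context embeddings, and practical implementations usually maximize the log of these probabilities):

import numpy as np

def sgns_objective(pos_pairs, neg_pairs, W, C):
    # W[w], C[c]: embedding vectors of target word w and context word c
    def p_observed(w, c):
        return 1.0 / (1.0 + np.exp(-np.dot(W[w], C[c])))   # Pr(D = 1 | w, c)
    score = sum(p_observed(w, c) for w, c in pos_pairs)          # pairs from D
    score += sum(1.0 - p_observed(w, c) for w, c in neg_pairs)   # pairs from D'
    return score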

Page 88: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning General Architecture

Efficiency

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 83 / 157 CityU, HK

Page 89: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

Table of Contents

1 Basic Concepts: Artificial Intelligence, Machine Learning, Deep Learning, Implementation

2 Neural Models for Representation Learning: General Architecture, Convolutional Neural Network, Recurrent Neural Network, Recursive Neural Network, Attention Model

3 Applications: Text Generation, Question Answering, Machine Translation, Text Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 84 / 157 CityU, HK

Page 90: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

Convolutional Neural Network

A convolutional neural network (CNN, or ConvNet) is a type of feed-forward artificial neural network. Distinguishing features15:

1 Local connectivity
2 Shared weights
3 Subsampling

15 Y. LeCun et al. "Gradient-based learning applied to document recognition". In: Proceedings of the IEEE 11 (1998).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 85 / 157 CityU, HK

Page 91: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

Convolution

Convolution is an integral that expresses the amount of overlap of one function as it is shifted over another function. Given an input sequence x_t, t = 1, · · · , n, and a filter f_t, t = 1, · · · , m, the convolution is

y_t = ∑_{k=1}^{m} f_k · x_{t−k+1}.   (59)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 86 / 157 CityU, HK
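Eq. 59 in code, computed only where the filter fully overlaps the input (equivalent to np.convolve(x, f, mode='valid')); the example signal and filter are arbitrary:

import numpy as np

def conv1d(x, f):
    # y_t = sum_k f_k * x_{t-k+1}, written with 0-based indices
    n, m = len(x), len(f)
    return np.array([sum(f[k] * x[t - k] for k in range(m))
                     for t in range(m - 1, n)])

print(conv1d(np.array([1, 2, 3, 4, 5]), np.array([1, 0, -1])))   # [2 2 2]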

Page 92: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

One-dimensional convolution

15Figure from: http://cs231n.github.io/convolutional-networks/

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 87 / 157 CityU, HK

Page 93: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

Two-dimensional convolution

Given an image x_{ij}, 1 ≤ i ≤ M, 1 ≤ j ≤ N, and a filter f_{ij}, 1 ≤ i ≤ m, 1 ≤ j ≤ n, the convolution is

y_{ij} = ∑_{u=1}^{m} ∑_{v=1}^{n} f_{uv} · x_{i−u+1, j−v+1}.   (60)

Mean filter: f_{uv} = 1/(mn)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 88 / 157 CityU, HK

Page 94: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

Convolutional Layer

a^(l) = f(w^(l) ⊗ a^(l−1) + b^(l)),   (61)

where ⊗ is the convolution operation and w^(l) is shared by all the neurons of the l-th layer. Only m + 1 parameters are needed, and n^(l+1) = n^(l) − m + 1.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 89 / 157 CityU, HK

Page 95: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

Fully Connected Layer V.S. Convolutional Layer

(a) Fully Connected Layer

(b) Convolutional Layer

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 90 / 157 CityU, HK

Page 96: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

Pooling Layer

It is common to periodically insert a pooling layer between successive convolutional layers, to:

progressively reduce the spatial size of the representation;
reduce the number of parameters and the computation in the network;
avoid overfitting.

For a feature map X^(l), we divide it into several (non-)overlapping regions R_k, k = 1, · · · , K. With a pooling (down-sampling) function down(·),

X_k^(l+1) = f(Z_k^(l+1))                          (62)
          = f( w^(l+1) · down(R_k) + b^(l+1) ).   (63)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 91 / 157 CityU, HK

Page 97: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

Pooling Layer

X^(l+1) = f(Z^(l+1))                             (64)
        = f( w^(l+1) · down(X^(l)) + b^(l+1) ),  (65)

Two choices of down(·): maximum pooling and average pooling.

pool_max(R_k) = max_{i∈R_k} a_i            (66)
pool_avg(R_k) = (1/|R_k|) ∑_{i∈R_k} a_i.   (67)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 92 / 157 CityU, HK
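The two pooling functions of Eqs. 66-67 over non-overlapping regions of a 1-D feature map; a small sketch with an arbitrary input:

import numpy as np

def pool(a, size=2, mode="max"):
    # split a into consecutive regions R_k of the given size, then pool each
    regions = [a[i:i + size] for i in range(0, len(a) - size + 1, size)]
    if mode == "max":
        return np.array([r.max() for r in regions])    # pool_max, Eq. 66
    return np.array([r.mean() for r in regions])       # pool_avg, Eq. 67

a = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 4.0])
print(pool(a, 2, "max"))   # [3. 5. 4.]
print(pool(a, 2, "avg"))   # [2.  3.5 4. ]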

Page 98: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

Pooling Layer

15Figure from: http://cs231n.github.io/convolutional-networks/

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 93 / 157 CityU, HK

Page 99: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

Large Scale Visual Recognition Challenge

2010-2015

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 94 / 157 CityU, HK

Page 100: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

DeepMind's AlphaGo

15http://cs231n.stanford.edu/

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 95 / 157 CityU, HK

Page 101: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

DeepMind's AlphaGo

15http://cs231n.stanford.edu/

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 96 / 157 CityU, HK

Page 102: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

CNN for Sentence Modeling

Input: A sentence of length n,After Lookup layer, X = [x1, x2, · · · , xn] ∈ Rd×n

variable-length inputconvolutionpooling

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 97 / 157 CityU, HK

Page 103: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

CNN for Sentence Modeling16

Key steps

convolutionzt:t+m−1 =xt ⊕ xt+1 ⊕ xt+m−1 ∈ Rdm

matrix-vector operationxlt = f (W lzt:t+m−1 + bl)Pooling (max over time)xli = maxt x

l−1i ,t

16 Collobert et al., �Natural language processing (almost) from scratch�.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 98 / 157 CityU, HK

Page 104: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

CNN for Sentence Modeling17

Key steps

convolutionzt:t+m−1 = xt ⊕ xt+1 ⊕ xt+m−1 ∈ Rdm

vector-vector operationx lt = f (wlzt:t+m−1 + bl)multiple �lters / multiple channelspooling (max over time)xli = maxt x

l−1i ,t

17 Y. Kim. �Convolutional neural networks for sentence classi�cation�. In: arXiv preprint arXiv:1408.5882(2014).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 99 / 157 CityU, HK

Page 105: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

Dynamic CNN for Sentence Modeling18

Key steps

one-dimensional convolutionxli ,t = f (wl

ixi ,t:t+m−1 + bli )k-max pooling (max over time)

k l = max(ktop, |L−l |L n)(optional) folding: sums every two rowsin a feature map component-wise.multiple �lters / multiple channels

18 N. Kalchbrenner, E. Grefenstette, and P. Blunsom. �A Convolutional Neural Network for Modelling Sentences�.In: Proceedings of ACL. 2014.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 100 / 157 CityU, HK

Page 106: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Convolutional Neural Network

CNN for Sentence Modeling19

Key steps

convolutionzt:t+m−1 = xt ⊕ xt+1 ⊕ xt+m−1 ∈ Rdm

matrix-vector operationxlt = f (Wlzl−1t:t+m−1 + bl)binary max pooling (over time)xli ,t = max(xl−1i ,2t−1, x

l−1i ,2t )

19 B. Hu et al. �Convolutional neural network architectures for matching natural language sentences�. In:Advances in Neural Information Processing Systems. 2014.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 101 / 157 CityU, HK

Page 107: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Table of Contents

1 Basic ConceptsArti�cial IntelligenceMachine LearningDeep LearningImplementation

2 Neural Models for Representation LearningGeneral ArchitectureConvolutional Neural NetworkRecurrent Neural NetworkRecursive Neural NetworkAttention Model

3 ApplicationsText GenerationQuestion AnsweringMachine TranslationText Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 102 / 157 CityU, HK

Page 108: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Recurrent Neural Network (RNN)

Output

Hidden Delay

Input

xt

ht

ht−1

ht

The RNN has recurrent hidden stateswhose output at each time isdependent on that of the previoustime. More formally, given a sequencex(1:n) = (x(1), . . . , x(t), . . . , x(n)), theRNN updates its recurrent hiddenstate h(t) by

ht =

{0 t = 0

f (ht−1, xt) otherwise(68)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 103 / 157 CityU, HK

Page 109: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Simple Recurrent Network20

ht = f (Uht−1 +Wxt + b), (69)

where f is non-linear function.

y1 y2 y3 y4 · · · yT

h1 h2 h3 h4 · · · hT

x1 x2 x3 x4 · · · xT

20 J. L. Elman. �Finding structure in time�. In: Cognitive science 2 (1990).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 104 / 157 CityU, HK

Page 110: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Backpropagation Through Time, BPTT

Loss at time t is Jt , the whole loss is J =∑T

t=1 Jt .The gradient of J is

∂J∂U

=T∑t=1

∂Jt∂U

(70)

=T∑t=1

∂ht∂U

∂Jt∂ht

, (71)

J1 J2 J3 J4 · · · JT

h1 h2 h3 h4 · · · hT

x1 x2 x3 x4 · · · xT

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 105 / 157 CityU, HK

Page 111: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Gradient of RNN

∂J∂U

=T∑t=1

t∑k=1

∂hk∂U

∂ht∂hk

∂yt∂ht

∂Jt∂yt

, (72)

∂ht∂hk

=t∏

i=k+1

∂hi∂hi−1

(73)

=t∏

i=k+1

UT diag[f ′(hi−1)]. (74)

∂J∂U

=T∑t=1

t∑k=1

∂hk∂U

(t∏

i=k+1

UT diag(f ′(hi−1))

)∂yt∂ht

∂Jt∂yt

. (75)

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 106 / 157 CityU, HK

Page 112: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Long-Term Dependencies

De�ne γ = ‖UT diag(f ′(hi−1))‖,

Exploding Gradient Problem: When γ > 1 and t − k →∞,γt−k →∞.Vanishing Gradient Problem: When γ < 1 and t − k →∞, γt−k → 0.

There are various ways to solve Long-Term Dependency problem.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 107 / 157 CityU, HK

Page 113: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Long Short Term Memory Neural Network (LSTM)21

The core of the LSTM model is a memory cell c encoding memory at everytime step of what inputs have been observed up to this step.The behavior of the cell is controlled by three �gates�:

input gate ioutput gate oforget gate f

it = σ(Wixt + Uiht−1 + Vict−1), (76)

ft = σ(Wf xt + Uf ht−1 + Vf ct−1), (77)

ot = σ(Woxt + Uoht−1 + Voct), (78)

ct = tanh(Wcxt + Ucht−1), (79)

ct = ft � ct−1 + it � ct , (80)

ht = ot � tanh(ct), (81)

21 S. Hochreiter and J. Schmidhuber. �Long short-term memory�. In: Neural computation 8 (1997).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 108 / 157 CityU, HK

Page 114: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

LSTM Architecture

ct−1 × + ct

× ×

ht−1

ft it ~ct ot

ht

σ σ tanh σ

xt

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 109 / 157 CityU, HK

Page 115: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Gated Recurrent Unit, GRU22

Two gates: update gate z and reset gate r.

rt = σ(Wrxt +Urht−1) (82)

zt = σ(Wzxt +Uzht−1) (83)

ht = tanh(Wcxt +U(rt � ht−1)), (84)

ht = zt � ht−1 + (1− zt)� ht , (85)

(86)

22 K. Cho et al. �Learning phrase representations using rnn encoder-decoder for statistical machine translation�.In: arXiv preprint arXiv:1406.1078 (2014); J. Chung et al. �Empirical evaluation of gated recurrent neural networkson sequence modeling�. In: arXiv preprint arXiv:1412.3555 (2014).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 110 / 157 CityU, HK

Page 116: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

GRU Architecture

ht−1 ×

zt rt

×

tanh

1−

~ht

σ σ

×

+ ht

xt

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 111 / 157 CityU, HK

Page 117: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Stacked RNN

y1 y2 y3 y4 · · · yT

h(3)1 h

(3)2 h

(3)3 h

(3)4 · · · h

(3)T

h(2)1 h

(2)2 h

(2)3 h

(2)4 · · · h

(2)T

h(1)1 h

(1)2 h

(1)3 h

(1)4 · · · h

(1)T

x1 x2 x3 x4 · · · xT

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 112 / 157 CityU, HK

Page 118: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Bidirectional RNN

h(1)t = f (U(1)h

(1)t−1 +W(1)xt + b(1)), (87)

h(2)t = f (U(2)h

(2)t+1 +W(2)xt + b(2)), (88)

ht = h(1)t ⊕ h

(2)t . (89)

y1 y2 y3 y4 · · · yT

· · ·

h(2)1

h(2)2

h(2)3

h(2)4 · · · h

(2)T

h(1)1

h(1)2

h(1)3

h(1)4 · · · h

(1)T

x1 x2 x3 x4 · · · xT

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 113 / 157 CityU, HK

Page 119: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Application of RNN: Sequence to Category

Text Classi�cation

Sentiment Classi�cation

h y

h1 h2 · · · hT

x1 x2 · · · xT

(c) Mean

y

h1 h2 · · · hT

x1 x2 · · · xT

(d) Last

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 114 / 157 CityU, HK

Page 120: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Application of RNN: Synchronous Sequence to Sequence

Sequence Labeling, such as Chinese word segmentation, POS tagging

y1 y2 · · · yT

h1 h2 · · · hT

x1 x2 · · · xT

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 115 / 157 CityU, HK

Page 121: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Application of RNN: Asynchronous Sequence to Sequence

Machine Translation

y1 y2 · · · yM

h1 h2 · · · hT hT+1 hT+2 · · · hT+M

x1 x2 · · · xT < EOS >

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 116 / 157 CityU, HK

Page 122: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Unfolded LSTM for Text Classi�cation

h1 h2 h3 h4 · · · hT softmax

x1 x2 x3 x4 xT y

Drawback: long-term dependencies need to be transmitted one-by-onealong the sequence.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 117 / 157 CityU, HK

Page 123: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Unfolded Multi-Timescale LSTM23

g31 g3

2 g33 g3

4 · · · g3T

g21 g2

2 g23 g2

4 · · · g2T softmax

g11 g1

2 g13 g1

4 · · · g1T y

x1 x2 x3 x4 xT

23 P. Liu et al. �Multi-Timescale Long Short-Term Memory Neural Network for Modelling Sentences andDocuments�. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing. 2015.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 118 / 157 CityU, HK

Page 124: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

LSTM for Sentiment Analysis

<s> Is this progress ? </s>

0.2

0.3

0.4

0.5

LSTMMT-LSTM

<s> He ’d create a movie better than this . </s>

0

0.2

0.4

0.6

0.8

LSTMMT-LSTM

<s> It ’s not exactly a gourmetmeal but the fare is fair , even coming from the drive . </s>

0

0.2

0.4

0.6

0.8

LSTMMT-LSTM

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 119 / 157 CityU, HK

Page 125: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Paragraph Vector24

24 Q. V. Le and T. Mikolov. �Distributed representations of sentences and documents�. In:arXiv preprint arXiv:1405.4053 (2014).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 120 / 157 CityU, HK

Page 126: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recurrent Neural Network

Memory Mechanism

What di�erences among the various models from memory view?

Short-term Long short-term Global External

SRN Yes No No No

LSTM/GRU Yes Yes Maybe No

PV Yes Yes Yes No

NTM/DMN Yes Yes Maybe Yes

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 121 / 157 CityU, HK

Page 127: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recursive Neural Network

Table of Contents

1 Basic ConceptsArti�cial IntelligenceMachine LearningDeep LearningImplementation

2 Neural Models for Representation LearningGeneral ArchitectureConvolutional Neural NetworkRecurrent Neural NetworkRecursive Neural NetworkAttention Model

3 ApplicationsText GenerationQuestion AnsweringMachine TranslationText Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 122 / 157 CityU, HK

Page 128: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recursive Neural Network

Recursive Neural Network (RecNN)25

a,Det red,JJ bike,NN

red bike,NP

a red bike,NP

Topological models composethe sentence representationfollowing a given topologicalstructure over the words.

Given a labeled binary parse tree,((p2 → ap1), (p1 → bc)), the noderepresentations are computed by

p1 = f (W

[b

c

]),

p2 = f (W

[a

p1

]).

25 R. Socher et al. �Parsing with compositional vector grammars�. In: Proceedings of ACL. 2013.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 123 / 157 CityU, HK

Page 129: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recursive Neural Network

Recursive Convolutional Neural Network26

Recursive neural network can only process the binary combination and isnot suitable for dependency parsing.

a,Det red,JJ bike,NN

Convolution

Pooling

a red bike,NN

a bike,NN red bike,NN

introducing the convolution andpooling layers;modeling the complicatedinteractions of the head wordand its children.

26 C. Zhu et al. �A Re-Ranking Model For Dependency Parser With Recursive Convolutional Neural Network�.In: Proceedings of Annual Meeting of the Association for Computational Linguistics. 2015.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 124 / 157 CityU, HK

Page 130: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recursive Neural Network

Tree-Structured LSTMs27

Natural language exhibits syntactic properties that would naturally combinewords to phrases.

Child-Sum Tree-LSTMsN-ary Tree-LSTMs

27 K. S. Tai, R. Socher, and C. D. Manning. �Improved semantic representations from tree-structured longshort-term memory networks�. In: arXiv preprint arXiv:1503.00075 (2015).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 125 / 157 CityU, HK

Page 131: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Recursive Neural Network

Gated Recursive Neural Network28

Rainy

下 雨Day

天Ground

地 面Accumulated water

积 水

M E SB

DAG based Recursive NeuralNetworkGating mechanism

An relative complicated solution

GRNN models the complicatedcombinations of the features, whichselects and preserves the usefulcombinations via reset and updategates.

28 X. Chen et al. �Gated Recursive Neural Network For Chinese Word Segmentation�. In:Proceedings of Annual Meeting of the Association for Computational Linguistics. 2015; X. Chen et al. �SentenceModeling with Gated Recursive Neural Network�. In:Proceedings of the Conference on Empirical Methods in Natural Language Processing. 2015.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 126 / 157 CityU, HK

Page 132: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Attention Model

Table of Contents

1 Basic ConceptsArti�cial IntelligenceMachine LearningDeep LearningImplementation

2 Neural Models for Representation LearningGeneral ArchitectureConvolutional Neural NetworkRecurrent Neural NetworkRecursive Neural NetworkAttention Model

3 ApplicationsText GenerationQuestion AnsweringMachine TranslationText Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 127 / 157 CityU, HK

Page 133: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Attention Model

Attention

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 128 / 157 CityU, HK

Page 134: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Neural Models for Representation Learning Attention Model

Attention Model29

1 The context vector ci is computedas a weighted sum of hi :

ci =T∑j=1

αijhj

2 The weight αij is computed by

αij = softmax(vT tanh(Wsi−1+Uhj))

29 D. Bahdanau, K. Cho, and Y. Bengio. �Neural Machine Translation by Jointly Learning to Align andTranslate�. In: ArXiv e-prints (Sept. 2014). arXiv: 1409.0473 [cs.CL].

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 129 / 157 CityU, HK

Page 135: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Text Generation

Table of Contents

1 Basic ConceptsArti�cial IntelligenceMachine LearningDeep LearningImplementation

2 Neural Models for Representation LearningGeneral ArchitectureConvolutional Neural NetworkRecurrent Neural NetworkRecursive Neural NetworkAttention Model

3 ApplicationsText GenerationQuestion AnsweringMachine TranslationText Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 130 / 157 CityU, HK

Page 136: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Text Generation

RNN-LM

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 131 / 157 CityU, HK

Page 137: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Text Generation

Unfolded RNN-LM

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 132 / 157 CityU, HK

Page 138: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Text Generation

Word VS. Character Modeling

29Chris Dyer, Should Model Architecture Re�ect Linguistic Structure?

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 133 / 157 CityU, HK

Page 139: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Text Generation

Composition of Lyric

汪峰作词机30

我在这里中的夜里就像一场是一种生命的意就像我的生活变得在我一样可我们这是一个知道我只是一天你会怎吗可我们这是我们的是不要为你我们想这有一种生活的时候

30https://github.com/phunterlau/wangfeng-rnn

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 134 / 157 CityU, HK

Page 140: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Text Generation

Poetry Generation31

31 X. Zhang and M. Lapata. �Chinese Poetry Generation with Recurrent Neural Networks.� In: EMNLP. 2014,pp. 670�680.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 135 / 157 CityU, HK

Page 141: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Question Answering

Table of Contents

1 Basic ConceptsArti�cial IntelligenceMachine LearningDeep LearningImplementation

2 Neural Models for Representation LearningGeneral ArchitectureConvolutional Neural NetworkRecurrent Neural NetworkRecursive Neural NetworkAttention Model

3 ApplicationsText GenerationQuestion AnsweringMachine TranslationText Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 136 / 157 CityU, HK

Page 142: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Question Answering

Question Answering

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 137 / 157 CityU, HK

Page 143: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Question Answering

LSTM32

32 K. M. Hermann et al. �Teaching machines to read and comprehend�. In:Advances in Neural Information Processing Systems. 2015, pp. 1684�1692.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 138 / 157 CityU, HK

Page 144: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Question Answering

Dynamic Memory Networks33

33 A. Kumar et al. �Ask me anything: Dynamic memory networks for natural language processing�. In:arXiv preprint arXiv:1506.07285 (2015).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 139 / 157 CityU, HK

Page 145: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Question Answering

Memory Networks34

34 S. Sukhbaatar, J. Weston, R. Fergus, et al. �End-to-end memory networks�. In:Advances in Neural Information Processing Systems. 2015, pp. 2431�2439.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 140 / 157 CityU, HK

Page 146: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Question Answering

Neural Reasoner35

35 B. Peng et al. �Towards Neural Network-based Reasoning�. In: arXiv preprint arXiv:1508.05508 (2015).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 141 / 157 CityU, HK

Page 147: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Machine Translation

Table of Contents

1 Basic ConceptsArti�cial IntelligenceMachine LearningDeep LearningImplementation

2 Neural Models for Representation LearningGeneral ArchitectureConvolutional Neural NetworkRecurrent Neural NetworkRecursive Neural NetworkAttention Model

3 ApplicationsText GenerationQuestion AnsweringMachine TranslationText Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 142 / 157 CityU, HK

Page 148: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Machine Translation

Current Statistical Machine Translation

Neural machine translation is a recently proposed framework for machinetranslation based purely on neural networks.

Source Language: fTarget Lauguage: eModel: e = argmaxe p(e|f ) = argmaxe p(f |e)p(e)

p(f |e): translation modelp(e): language model

35Image Source: http://www.statmt.org/wpt05/mt-shared-task/

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 143 / 157 CityU, HK

Page 149: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Machine Translation

Sequence to Sequence Machine Translation36

Neural machine translation is a recently proposed framework for machinetranslation based purely on neural networks.

One RNN for encoding;Another RNN for decoding.

36 I. Sutskever, O. Vinyals, and Q. V. Le. �Sequence to sequence learning with neural networks�. In:Advances in Neural Information Processing Systems. 2014, pp. 3104�3112.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 144 / 157 CityU, HK

Page 150: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Machine Translation

Image Caption373839

37 A. Karpathy and L. Fei-Fei. �Deep visual-semantic alignments for generating image descriptions�. In:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015, pp. 3128�3137.38 O. Vinyals et al. �Show and tell: A neural image caption generator�. In:

Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015, pp. 3156�3164.39 K. Xu et al. �Show, attend and tell: Neural image caption generation with visual attention�. In:

arXiv preprint arXiv:1502.03044 (2015).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 145 / 157 CityU, HK

Page 151: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Machine Translation

Image Caption

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 146 / 157 CityU, HK

Page 152: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Text Matching

Table of Contents

1 Basic ConceptsArti�cial IntelligenceMachine LearningDeep LearningImplementation

2 Neural Models for Representation LearningGeneral ArchitectureConvolutional Neural NetworkRecurrent Neural NetworkRecursive Neural NetworkAttention Model

3 ApplicationsText GenerationQuestion AnsweringMachine TranslationText Matching

4 Challenges & Open Problems

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 147 / 157 CityU, HK

Page 153: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Text Matching

Text Matching

Among many natural language processing (NLP) tasks, such as textclassi�cation, question answering and machine translation, a commonproblem is modelling the relevance/similarity of a pair of texts, which isalso called text semantic matching.Three types of interaction models:

Weak interaction ModelsSemi-interaction ModelsStrong Interaction Models

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 148 / 157 CityU, HK

Page 154: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Text Matching

Weak interaction Models

Some early works focus on sentence level interactions, such as ARC-I40,CNTN41 and so on. These models �rst encode two sequences intocontinuous dense vectors by separated neural models, and then computethe matching score based on sentence encoding.

question+f +

answer MatchingScore

Figure: Convolutional Neural Tensor Network

40 Hu et al., �Convolutional neural network architectures for matching natural language sentences�.41 X. Qiu and X. Huang. �Convolutional Neural Tensor Network Architecture for Community-based Question

Answering�. In: Proceedings of International Joint Conference on Arti�cial Intelligence. 2015.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 149 / 157 CityU, HK

Page 155: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Text Matching

Semi-interaction Models

Another kind of models use soft attention mechanism to obtain therepresentation of one sentence by depending on representation of anothersentence, such as ABCNN42, Attention LSTM43. These models canalleviate the weak interaction problem to some extent.

Figure: Attention LSTM44

42 W. Yin et al. �ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs�. In:arXiv preprint arXiv:1512.05193 (2015).43 Hermann et al., �Teaching machines to read and comprehend�.44 T. Rockt schel et al. �Reasoning about Entailment with Neural Attention�. In:

arXiv preprint arXiv:1509.06664 (2015).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 150 / 157 CityU, HK

Page 156: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Applications Text Matching

Strong Interaction Models

The models build the interaction at di�erent granularity (word, phrase andsentence level), such as ARC-II45, MV-LSTM46, coupled-LSTMs47. The�nal matching score depends on these di�erent levels of interactions.

Figure: coupled-LSTMs

45 Hu et al., �Convolutional neural network architectures for matching natural language sentences�.46 S. Wan et al. �A Deep Architecture for Semantic Matching with Multiple Positional Sentence Representations�.In: AAAI. 2016.47 P. Liu, X. Qiu, and X. Huang. �Modelling Interaction of Sentence Pair with coupled-LSTMs�. In:

arXiv preprint arXiv:1605.05573 (2016).

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 151 / 157 CityU, HK

Page 157: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Challenges & Open Problems

Conclusion of DL4NLP (just kidding)

Long long ago: you must know intrinsic rules of data.Past ten years: you just know e�ective features of data.Nowadays: you just need to have big data.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 152 / 157 CityU, HK

Page 158: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Challenges & Open Problems

Challenges & Open Problems

Depth of networkRare wordsFundamental data structureLong-term dependenciesNatural language understanding & reasoningBiology inspired modelUnsupervised learning

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 153 / 157 CityU, HK

Page 159: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Challenges & Open Problems

DL4NLP from scratch

Select a real problemTranslate your problem to (supervised) learning problemPrepare your data and hardware (GPU)Select a library: Theano, Keras, TensorFlowFind an open source implementationIncrementally writing your own codeRun it.

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 154 / 157 CityU, HK

Page 160: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Challenges & Open Problems

Seq2Seq48

48https://twitter.com/IAugenstein/status/710837374473920512

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 155 / 157 CityU, HK

Page 161: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Challenges & Open Problems

More Information

http://nlp.fudan.edu.cn/dl-book/http://vdisk.weibo.com/s/A_pmE4iIotGf

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 156 / 157 CityU, HK

Page 162: Deep Learning for Natural Language Processing › slides › 20160618_DL4NLP@CityU.pdf · Machine Learning Deep Learning Implementation 2 Neural Models for Representation Learning

Challenges & Open Problems

Recommended Courses

CS224d: Deep Learning for Natural Language Processing

http://cs224d.stanford.edu/斯坦福大学Richard Socher主要讲解自然语言处理领域的各种深度学习模型

CS231n:Convolutional Neural Networks for Visual Recognition

http://cs231n.stanford.edu/斯坦福大学Fei-Fei Li Andrej Karpathy主要讲解CNN、RNN在图像领域的应用

Xipeng Qiu (Fudan University) Deep Learning for Natural Language Processing 157 / 157 CityU, HK