Deep Learning (sebag/Slides/AIC_2018_Ad_DA.pdf, 2018.10.24)

Page 1:

Deep Learning

Alexandre Allauzen, Michèle Sebag
CNRS & Université Paris-Sud

Oct. 17th, 2018

Credit for slides: Sanjeev Arora; Yoshua Bengio; Yann LeCun; Nando de Freitas; Pascal Germain; Léon Gatys; Weidi Xie; Max Welling; Victor Berger; Kevin Frans; Lars Mescheder et al.; Mehdi Sajjadi et al.; Pascal Germain et al.; Ian Goodfellow; Arthur Pesah

1 / 52

Page 2:

Adversarial examples

Domain Adaptation: Formal background

Introduction: Position of the problem; Applications; Settings

Key concept: distance between source and target distributions

Some Domain Adaptation Algorithms: Domain Adversarial Neural Network; Evaluating DA algorithms; DANN improvements and relaxations

2 / 52

Page 3:

What happens when perturbing an example?

Anything!

3 / 52

Page 4:

Informed perturbations

Goodfellow et al. 15

For x′ a perturbation of x, to first order:

F(x′, θ) ≈ F(x, θ) + ⟨x′ − x, ∇x F(x, θ)⟩

Nasty small perturbations?

Maximize ⟨x′ − x, ∇x F(x, θ)⟩ subject to ‖x′ − x‖∞ ≤ ε
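The maximizer of this linearized objective under the ℓ∞ ball has a closed form, x′ = x + ε·sign(∇x F): the fast gradient sign method of Goodfellow et al. A minimal sketch (the linear "network" and the numbers are illustrative, not from the slides):

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    # argmax over x' of <x' - x, grad> subject to ||x' - x||_inf <= eps:
    # move every coordinate by eps in the direction of the gradient's sign.
    return x + eps * np.sign(grad)

# Toy linear "network" F(x, w) = <w, x>, whose gradient w.r.t. x is w.
w = np.array([0.5, -2.0, 1.0])
x = np.array([1.0, 1.0, 1.0])
x_adv = fgsm_perturb(x, w, eps=0.1)
# The achieved increase <x_adv - x, w> equals eps * ||w||_1,
# the best possible under the constraint.
```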

4 / 52

Page 5:

Example

Goodfellow et al. 15

5 / 52

Page 6:

Example 2

Karpathy et al. 15

6 / 52

Page 7:

The lesson of adversarial examples

- Good performance does not imply that the network has "got" the task!

- Small modifications are enough to make it change its diagnosis

- Terrible implications for autonomous vehicles!

- An arms race: modify the learning criterion; find adversarial examples defeating the modified criterion; iterate

- More in the Course/Seminar!

7 / 52

Page 8:

Adversarial examples

Domain Adaptation: Formal background

Introduction: Position of the problem; Applications; Settings

Key concept: distance between source and target distributions

Some Domain Adaptation Algorithms: Domain Adversarial Neural Network; Evaluating DA algorithms; DANN improvements and relaxations

8 / 52

Page 9:

What is domain adaptation?

Some differences should make no difference.

Domain adaptation:

- Learning from poor data by leveraging other (not really, not much different) data

- Teaching the learner to overcome these differences

9 / 52

Page 10:

Adversarial examples

Domain Adaptation: Formal background

Introduction: Position of the problem; Applications; Settings

Key concept: distance between source and target distributions

Some Domain Adaptation Algorithms: Domain Adversarial Neural Network; Evaluating DA algorithms; DANN improvements and relaxations

10 / 52

Page 11:

Have you been to Stockholm recently?

11 / 52

Page 12:

... you recognize the castle ...

regardless of light, style, angle...

12 / 52

Page 13:

Formally

Domain Adaptation

- Task: classification, or regression

- A source domain, with source distribution Ds

- A target domain, with target distribution Dt

Idea

- Source and target are "sufficiently" related...

- ... so one wants to use source data to improve learning from target data

13 / 52

Page 14:

Applications

1. Calibration

2. Physiological signals

3. Reality gap (simulation vs real-world)

4. Lab essays

5. Similar worlds

14 / 52

Page 15:

Application 1. Calibration

Different devices:

- same specifications (in principle);

- in practice, each device's response function is biased.

- Goal: recover the output complying with the specifications.

15 / 52

Page 16:

Application 2. Physiological signals

Won Kyu Lee et al. 2016

Different signals:

- acquired from different sensors (different price, SNR);

- Goal: predict from the poorer signal.

16 / 52

Page 17:

Application 3. Bridging the reality gap

A source world built to model the target world:

- Target (expensive): the real world

- Source (cheap, approximate): a simulator

- Goal: get the best of both worlds

In robotics; for autonomous vehicles; for science (e.g. the Higgs boson ML challenge); ...

17 / 52

Page 18:

Application 4. Learning across labs

Schoenauer et al. 18

Many labs, many experiments in quantitative microscopy:

- Each dataset: known and unknown perturbations; experimental bias

- Goal: identify drugs in datasets: in silico discovery.

18 / 52

Page 19:

Application 5. Bridges between worlds

Different domains:

- supposedly related;

- one (source) is well known;

- the other (target) less so: few or no labels.

- Goal: learn faster/better on the target domain

19 / 52

Page 20:

At the root of domain adaptation: analogical reasoning

Hofstadter 1979: analogy is at the core of cognition

Solar system ↔ Atom and electrons

Bongard IQ tests

20 / 52

Page 21:

Roots of domain adaptation, 2

Training on male mice; testing on male and female mice?

Relaxing the iid assumption: when training and test distributions differ

- Class ratios are different: Kubat et al. 97; Lin et al. 02; Chan and Ng 05

- Marginals are different (covariate shift): Shimodaira 00; Zadrozny 04; Sugiyama et al. 05; Bickel et al. 07

21 / 52

Page 22:

Adversarial examples

Domain Adaptation: Formal background

Introduction: Position of the problem; Applications; Settings

Key concept: distance between source and target distributions

Some Domain Adaptation Algorithms: Domain Adversarial Neural Network; Evaluating DA algorithms; DANN improvements and relaxations

22 / 52

Page 23:

Settings: Domain adaptation vs. transfer learning

Notations:

            Joint dist.   Marginal (instance) dist.   Conditional dist.
  Source    Ds            Ps(X)                       Ps(Y|X)
  Target    Dt            Pt(X)                       Pt(Y|X)

The settings:

- Same instance distributions Ps(X) = Pt(X)
  - Same conditional distributions Ps(Y|X) = Pt(Y|X): usual setting
  - Different conditional distributions Ps(Y|X) ≠ Pt(Y|X): concept drift (inductive transfer learning)

- Different instance distributions Ps(X) ≠ Pt(X)
  - Same conditional distributions Ps(Y|X) = Pt(Y|X): domain adaptation (transductive transfer learning)
  - Different conditional distributions Ps(Y|X) ≠ Pt(Y|X): concept drift (unsupervised transfer learning)

NB: for some authors, all settings but the usual one are transfer learning.
NB: multi-task learning: dom(Ys) ≠ dom(Yt).
NB: a continuum from domain adaptation to transfer learning to multi-task learning.
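The distinction between a shifted marginal (domain adaptation) and a changed conditional (concept drift) can be made concrete with a toy sketch (the distributions and labelling rule below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

label = lambda x: (x > 0).astype(int)     # the labelling rule P(Y|X)

# Domain adaptation / covariate shift: Ps(X) != Pt(X), same P(Y|X).
xs = rng.normal(0.0, 1.0, size=500)       # source marginal
xt = rng.normal(2.0, 1.0, size=500)       # shifted target marginal
ys, yt = label(xs), label(xt)             # the same rule on both domains

# Concept drift: same marginal, but the conditional P(Y|X) changed.
x = rng.normal(0.0, 1.0, size=500)
y_old, y_new = label(x), label(-x)        # the rule itself flipped
```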

23 / 52

Page 24:

Examples of concept drift

- Which speed is reached for a given actuator value? It decreases as the motor ages.

- The concept of "chic"? It depends on the century: nice, cool, ...

Related: lifelong learning

Shameless ad for AutoML3: AutoML for Lifelong ML, 2018

24 / 52

Page 25:

Toy example of domain adaptation: the intertwining moons

25 / 52

Page 26:

Settings, 2

General assumptions:

- Wealth of information about the source domain

- Scarce information about the target domain

Domain adaptation aims at alleviating the costs:

- of labelling target examples

- of acquiring target examples

No target labels: unsupervised domain adaptation
Partial labels: partially unsupervised domain adaptation
Few samples: few-shot domain adaptation

26 / 52

Page 27:

Adversarial examples

Domain Adaptation: Formal background

Introduction: Position of the problem; Applications; Settings

Key concept: distance between source and target distributions

Some Domain Adaptation Algorithms: Domain Adversarial Neural Network; Evaluating DA algorithms; DANN improvements and relaxations

27 / 52

Page 28:

Key concept: distance between source and target marginal distributions

1. The larger the distance, the more difficult the domain adaptation.

2. Can we measure it? (theory) If so, turn the measure into a loss to be minimized.

3. Can we reduce it? (algorithms)

The two-moons problem

Page 29:

Domain adaptation, intuition

What we have | What we want

29 / 52

Page 30:

Distance between source and target marginal distributions, continued

Main strategies:

- Reduce it in the original space X: importance sampling

- Modify the source representation: optimal transport

- Map source and target onto a third, latent space: domain-adversarial approaches

- Build generative mechanisms in the latent space: generative approaches

Milestone: defining distances between distributions
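To illustrate the first strategy, a minimal importance-sampling sketch (the Gaussian marginals are an illustrative choice; when both densities are known, the weight w(x) = pt(x)/ps(x) reweights source samples so that source averages estimate target expectations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Source marginal N(0, 1), target marginal N(1, 1) (illustrative choice).
xs = rng.normal(0.0, 1.0, size=100_000)

# Density ratio w(x) = pt(x) / ps(x) = exp(x - 1/2) for these two Gaussians.
w = np.exp(xs - 0.5)

# Self-normalized importance-sampling estimate of E_t[X] (true value: 1),
# computed from source samples only.
est = np.sum(w * xs) / np.sum(w)
```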

30 / 52

Page 31:

Discrepancy between source and target marginal distributions
Ben-David 06, 10

H-divergence between Ps and Pt:

d_H(Ps, Pt) = 2 sup_{h ∈ H} | Pr_{x∼Ps}(h(x) = 1) − Pr_{x∼Pt}(h(x) = 1) |

This divergence is high if there exists an h ∈ H separating Ps and Pt.

Perfect separation case:

⇒ no guarantee of generalization.

31 / 52

Page 32:

Discrepancy between source and target marginal distributions, 2
Ben-David 06, 10

d_H(Ps, Pt) = 2 sup_{h ∈ H} | Pr_{x∼Ps}(h(x) = 1) − Pr_{x∼Pt}(h(x) = 1) |

Perfectly mixed case:

⇒ what is learned on the source carries over to the target.

32 / 52

Page 33:

Discrepancy between source and target marginal distributions, 3

Ben-David et al. 2006, 2010

Approximation of the H-divergence: the Proxy A-distance (PAD)

d_A(Ps, Pt) = 2 ( 1 − min_{h ∈ H} [ (1/n) Σ_i 1{h(x_i) = 0} + (1/n′) Σ_j 1{h(x′_j) = 1} ] )

where the x_i are the n source examples and the x′_j the n′ target examples.

The divergence can be approximated by the ability to empirically discriminate between source and target examples.

Comment: estimating distribution differences relates to two-sample tests.
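A minimal empirical sketch of this proxy (a single fixed threshold classifier rather than a true min over H, so it only lower-bounds the PAD; the Gaussian marginals are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def proxy_a_distance(xs, xt, h):
    # h(x) = 1 means "looks like source", h(x) = 0 means "looks like target".
    # The bracketed quantity is the domain classifier's empirical error.
    err = np.mean(h(xs) == 0) + np.mean(h(xt) == 1)
    return 2.0 * (1.0 - err)

xs = rng.normal(0.0, 1.0, size=1000)      # source marginal
xt = rng.normal(3.0, 1.0, size=1000)      # well-separated target marginal

thr = (xs.mean() + xt.mean()) / 2.0       # crude domain classifier
h = lambda x: (x < thr).astype(int)       # below the threshold -> "source"

d_far = proxy_a_distance(xs, xt, h)       # close to the maximum, 2

xt_same = rng.normal(0.0, 1.0, size=1000) # target == source distribution
d_same = proxy_a_distance(xs, xt_same, h) # near 0: domains indistinguishable
```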

33 / 52

Page 34:

Bounding the domain adaptation risk

Ben-David et al. 2006, 2010

Notations:

- Rs(h) = E_{Ds}[L(h)]: risk of h under the source distribution

- Rt(h) = E_{Dt}[L(h)]: risk of h under the target distribution

Theorem. With probability 1 − δ, if d(H) is the VC dimension of H,

Rt(h) ≤ Rs(h) + d_H(Ps, Pt) + C √( (4/n) ( d(H) log(2n/d(H)) + log(4/δ) ) ) + λ

with λ = inf_h ( Rs(h) + Rt(h) )   (the "best possible" joint risk)

What we want (the risk of h w.r.t. Dt) is bounded by:

- the empirical risk on the source domain;

- + the divergence term (Proxy A-distance);

- + an error term related to possible overfitting;

- + the minimum error achievable jointly on the source and target distributions.
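To get a feel for the overfitting term, a small numeric sketch (the values of n, d(H), δ and the constant C = 1 are made up for illustration):

```python
import math

def vc_term(n, d, delta, C=1.0):
    # The C * sqrt((4/n)(d(H) log(2n/d(H)) + log(4/delta))) term of the bound.
    return C * math.sqrt((4.0 / n) * (d * math.log(2 * n / d)
                                      + math.log(4 / delta)))

# 10,000 source examples, VC dimension 50, confidence 95%:
v = vc_term(n=10_000, d=50, delta=0.05)   # roughly 0.35
```

The term shrinks as O(1/√n), so with ample source data the bound is dominated by the divergence and the joint-risk terms.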

34 / 52

Page 35:

Interpretation

Ben-David et al. 2006, 2010

The regret. With probability 1 − δ, if d(H) is the VC dimension of H,

Rt(h) − λ ≤ Rs(h) + C √( (4/n) ( d(H) log(2n/d(H)) + log(4/δ) ) ) + d_H(Ps, Pt)

with λ the best risk achievable jointly on source and target.

Hence a domain adaptation strategy:

- Choose H with good potential;

- Minimize d_H: by transporting the source data, or by mapping source and target toward another, favorable space.

35 / 52

Page 36:

Adversarial examples

Domain Adaptation: Formal background

Introduction: Position of the problem; Applications; Settings

Key concept: distance between source and target distributions

Some Domain Adaptation Algorithms: Domain Adversarial Neural Network; Evaluating DA algorithms; DANN improvements and relaxations

36 / 52

Page 37:

Extending Adversarial Ideas to Domain Adaptation

Input:

Es = {(x_{s,i}, y_i), i = 1, ..., n}    (labelled source examples)
Et = {x_{t,j}, j = 1, ..., m}           (unlabelled target examples)

Principle:

- What matters is the distance between Ds and Dt (Ben-David et al. 2010)

- Strategy: map both onto a same latent space in an indistinguishable manner

Ganin et al., 2015; 2016

Page 38:

Domain Adversarial Neural Net

Ganin et al. 2015; 2016

Adversarial modules (colors refer to the original figure):

- Encoder Gf (green): xs ↦ Gf(xs); xt ↦ Gf(xt)

- Discriminator Gd (red): trained from {(Gf(x_{s,i}), 1)} ∪ {(Gf(x_{t,j}), 0)}

Find max_{Gf} min_{Gd} L(Gd, Gf)

And a classifier module:

- Gy (blue): L(Gy) = Σ_i ℓ(Gy(Gf(x_{s,i})), y_i)

- NB: needed to prevent the trivial solution Gf ≡ 0

38 / 52

Page 39:

DANN, 2

Ganin et al. 2015; 2016

Training:

1. Classifier: backprop from ∇L(Gy) (blue)

2. Encoder: backprop from ∇L(Gy) and −∇L(Gd) (green)

3. Discriminator: backprop from ∇L(Gd) (red)
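The three steps above can be sketched end to end. A minimal numpy sketch with linear modules and a hand-written gradient-reversal update (the toy data, module sizes and learning rates are all made up for illustration; Ganin et al. use deep networks and a gradient-reversal layer inside backprop):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends on x1 only; the two domains differ by a
# shift along x0, so a good encoder can simply suppress x0.
Xs = rng.normal(size=(200, 2))
ys = (Xs[:, 1] > 0).astype(float)
Xt = rng.normal(size=(200, 2)) + np.array([2.0, 0.0])

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W = rng.normal(scale=0.5, size=(2, 2))   # encoder Gf (linear, for brevity)
wy = np.zeros(2)                         # classifier Gy (logistic)
wd = np.zeros(2)                         # domain discriminator Gd (logistic)
dom = np.r_[np.ones(len(Xs)), np.zeros(len(Xt))]   # source = 1, target = 0
lr, lam = 0.1, 1.0

for _ in range(1000):
    Fs, Ft = Xs @ W, Xt @ W
    F = np.vstack([Fs, Ft])
    # 1. Classifier step: cross-entropy gradient on labelled source features.
    py = sigmoid(Fs @ wy)
    g_wy = Fs.T @ (py - ys) / len(ys)
    # 2. Discriminator step: which domain did each feature come from?
    p_dom = sigmoid(F @ wd)
    g_wd = F.T @ (p_dom - dom) / len(dom)
    # 3. Encoder step: follow the label loss, REVERSE the domain loss.
    gW_y = Xs.T @ np.outer(py - ys, wy) / len(ys)
    gW_d = np.vstack([Xs, Xt]).T @ np.outer(p_dom - dom, wd) / len(dom)
    wy -= lr * g_wy
    wd -= lr * g_wd
    W -= lr * (gW_y - lam * gW_d)        # minus sign = gradient reversal

# Label accuracy on the (unlabelled during training) target domain, and
# domain-discriminator accuracy, which should drift toward chance.
acc_t = np.mean((sigmoid((Xt @ W) @ wy) > 0.5) == (Xt[:, 1] > 0))
acc_d = np.mean((sigmoid(np.vstack([Xs @ W, Xt @ W]) @ wd) > 0.5) == dom)
```

The reversed discriminator gradient pushes the encoder to erase the domain-informative direction (x0 here) while the classifier loss keeps the label-informative one.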

39 / 52

Page 40:

The algorithm

40 / 52

Page 41:

The intertwining moons

- left: the decision boundary

- 2nd from left: PCA applied to the feature layer

- 3rd from left: discrimination of source vs. target

- right: each line is the 0.5 level set of a hidden neuron

41 / 52

Page 42:

Mixing the distributions in latent space

MNIST → MNIST-M

Syn → SVHN

42 / 52

Page 43:

Evaluation

Top: SVHN; Bottom: MNIST

Usual practice:

- The reference experiment: adapting from Street View House Numbers (SVHN, source) to MNIST (handwritten digits, target)

- Score: accuracy on the MNIST test set.

- Caveat: reported improvements might come from:
  1. algorithm novelty;
  2. the neural architecture;
  3. hyperparameter tuning?

- Lesion (ablation) studies are required!

43 / 52

Page 44:

Experimental setting

Ganin et al., 16

The datasets:

- MNIST: as usual

- MNIST-M: MNIST blended with patches randomly extracted from color photos of BSDS500

- SVHN: the Street-View House Numbers dataset

- Syn Numbers: digits rendered from Windows fonts, with varying positioning, orientation, background and stroke colors, and blur

- Street Signs: real (430) and synthetic (100,000)

44 / 52

Page 45:

Results

Ganin et al., 16

Score DANN: 74%

45 / 52

Page 46:

Adversarial examples

Domain Adaptation: Formal background

Introduction: Position of the problem; Applications; Settings

Key concept: distance between source and target distributions

Some Domain Adaptation Algorithms: Domain Adversarial Neural Network; Evaluating DA algorithms; DANN improvements and relaxations

46 / 52

Page 47:

Decoupling the encoder: ADDA

Tzeng et al., 2017

Adversarial Discriminative Domain Adaptation (ADDA):

- DANN used a single encoder Gf for both the source and target domains

- ADDA learns Gf,s and Gf,t independently, both subject to the domain discriminator Gd, and Gf,s also subject to the classifier Gy

- Rationale: makes it easier to handle source and target with different dimensionality, specificities, ...

Score DANN: 74%
Score ADDA: 76%

47 / 52

Page 48:

Replacing domain discrimination with reconstruction: DRCN

Ghifary et al., 2016

Deep Reconstruction-Classification Networks (DRCN):

- DANN used a discriminator Gd to discriminate Gf(xt) from Gf(xs)

- DRCN replaces Gd with a decoder such that Gd(Gf(xt)) ≈ xt

- Rationale: the latent space preserves all the information from the target, while enabling classification on the source.

Score DANN: 74%
Score ADDA: 76%
Score DRCN: 82%

48 / 52

Page 49:

Hybridizing ADDA and DRCN: Domain Separation Networks

Bousmalis et al., 2016

Domain Separation Networks (DSN):

- Encoder:
  - a shared part Gf,u
  - a private source part Gf,s
  - a private target part Gf,t

- Discriminator → decoder:
  - Gd(Gf,u(xs), Gf,s(xs)) ≈ xs
  - Gd(Gf,u(xt), Gf,t(xt)) ≈ xt

(". . ." in the original figure stands for "shared weights")

Score DANN: 74%
Score ADDA: 76%
Score DRCN: 82%
Score DSN: 82.7%

49 / 52

Page 50:

Not covered...

- Optimal transport: Cuturi & Peyré 18; Courty et al. 17, 18

- Generative networks and domain-to-domain translation: Taigman et al. 16; Sankaranarayanan et al. 17; Liu et al. 17; Choi et al. 17; Anoosheh et al. 17; Shu et al. 18

- Partial domain adaptation: Motiian et al. 17a, b; Schoenauer-Sebag 18

50 / 52

Page 51:

Conclusions

Theory and validation:

- Most theoretical analyses rely on Ben-David et al. 06; 10

- When using a feature space, something is overlooked (see DRCN).

- Comprehensive ablation studies are needed to assess the mixtures of losses/architectures

- Assessing the assumptions

Applications:

- Many applications in vision (the "wow" effect?)

- Reinforcement learning!

- Natural language processing!

51 / 52

Page 52:

Take home message

What is domain adaptation:

- Playing with tasks and distributions

- Making assumptions about how they are related

- Testing your assumptions

Domain adaptation is like playing Lego with ML

52 / 52