
Page 1:

Generating Visual Explanations

Lisa et al.

이종진 (Jongjin Lee)

Seoul National University

ga0408@snu.ac.kr

Nov 15, 2018


Page 2:

Explainable AI; Generating Visual Explanations

▶ Deep classification methods have had tremendous success in visual recognition.

▶ Most of them, however, cannot provide a consistent justification for why they make a certain prediction.

Page 3:

Explainable AI; Generating Visual Explanations

▶ The proposed model predicts a class label (CNN) and explains why the predicted label is appropriate for the image (RNN).

▶ It is the first method to produce deep visual explanations using language justifications.

▶ It provides an explanation, not a description.

Page 4:

Visual Explanation

Description: This is a large bird with a white neck and a black back in the water.

Class Definition: The Western Grebe is a waterbird with a yellow pointy beak, white neck and belly, and black back.

Explanation: This is a Western Grebe because this bird has a long white neck, pointy yellow beak and red eye.

▶ An explanation should be class discriminative!

Page 5:

Visual Explanation

▶ Visual explanations are both image relevant and class relevant.

▶ They must discriminate between classes and accurately describe a specific image instance. → Novel loss function.

Page 6:

Proposed Model

▶ Input: image (+ descriptive sentences)

▶ Output: "This is a CLASS, because argument 1 and argument 2 and ..."

▶ Uses a pretrained CNN (compact bilinear fine-grained classification model) and a sentence classifier (single-layer LSTM).

▶ Two contributions, a predicted label used as an input and a novel loss (discriminative loss) for image relevance and class relevance (see the forward-pass sketch below):

1. Use the predicted label as an input.

2. Propose a novel reinforcement-learning-based loss for image relevance and class relevance.
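To make the two-stage design concrete, here is a minimal PyTorch sketch of the generator's forward pass. All module and parameter names (ExplanationGenerator, img_proj, the 8,192-d feature size) are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class ExplanationGenerator(nn.Module):
    # Hypothetical sketch: a frozen CNN supplies image features; the predicted
    # label is embedded, and both are fed to the LSTM at every time step.
    def __init__(self, vocab_size, num_classes,
                 feat_dim=8192, embed_dim=1000, hidden_dim=1000):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.label_embed = nn.Embedding(num_classes, embed_dim)
        self.img_proj = nn.Linear(feat_dim, embed_dim)   # compress 8,192-d CNN features
        self.lstm = nn.LSTM(3 * embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)     # next-word logits

    def forward(self, img_feat, pred_label, tokens):
        # img_feat: (B, feat_dim), pred_label: (B,), tokens: (B, T) previous words
        ctx = torch.cat([self.img_proj(img_feat), self.label_embed(pred_label)], dim=-1)
        ctx = ctx.unsqueeze(1).expand(-1, tokens.size(1), -1)  # repeat context per step
        x = torch.cat([self.word_embed(tokens), ctx], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)   # (B, T, vocab_size)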

Page 7:

Architecture

Figure: Architecture


Page 8:

Bilinear Models

▶ f : L × I → R^{c×D}, where L is the set of locations and I an image.

▶ f_A, f_B : pretrained VGG networks.

▶ Use a pooling operation P(f_A(l, I)^T f_B(l, I), l ∈ L).

▶ e.g. φ(I) = Σ_{l∈L} f_A(l, I)^T f_B(l, I) (see the numeric sketch below).
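A minimal numeric sketch of the sum-pooled bilinear feature φ(I); the shapes (a 14×14 VGG conv feature map with 512 channels) are illustrative assumptions. The compact bilinear model used by the paper approximates this large outer-product feature with a much smaller (8,192-d) sketch.

import torch

def bilinear_pool(fa, fb):
    # fa: (L, c), fb: (L, D) local descriptors at L spatial locations;
    # fa.t() @ fb sums the outer products f_A(l, I)^T f_B(l, I) over all l in L
    return (fa.t() @ fb).flatten()

fa = torch.randn(196, 512)   # e.g. 14x14 locations of a VGG conv layer
fb = torch.randn(196, 512)
phi = bilinear_pool(fa, fb)  # 512*512 = 262,144-d before any compact approximation
print(phi.shape)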

Page 9:

Proposed loss

▶ Proposed loss:

L_R − λ E_{w̃∼p_L(w)}[R_D(w̃)]

▶ The relevance loss (L_R) is related to "image relevance".

▶ The discriminative loss (E_{w̃∼p_L(w)}[R_D(w̃)]) is related to "class relevance".

Page 10:

Relevance Loss

▶ Relevance loss (L_R), sketched in code below:

L_R = (1/N) Σ_{n=0}^{N−1} Σ_{t=0}^{T−1} log p_L(w_{t+1} | w_{0:t}, I, C)

– w_t : ground-truth word at step t, I : image, C : category, N : batch size

– Average hidden state of the LSTM
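In practice L_R is implemented as a token-level cross-entropy, the negative of the sum of log-probabilities above, so minimizing it maximizes the likelihood of the ground-truth words. A minimal sketch, assuming the generator's (N, T, V) logits:

import torch.nn.functional as F

def relevance_loss(logits, targets):
    # logits: (N, T, V) next-word scores conditioned on w_{0:t}, I, and C;
    # targets: (N, T) ground-truth words w_1..w_T. Cross-entropy averages
    # -log p_L(w_{t+1} | w_{0:t}, I, C) over the batch and time steps.
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))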

Page 11:

▶ Discriminative loss:

E_{w̃∼p_L(w)}[R_D(w̃)]

– Based on a reinforcement learning paradigm.

– R_D(w̃) = p_D(C | w̃)

– p_D(C | w) : a pretrained sentence classifier

– The accuracy of this pretrained classifier is not important (22%).

– w̃ : sentences sampled from the LSTM (p_L(w)); see the sketch below.
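A sketch of computing the reward R_D(w̃) for one batch. The generator's sample interface and the classifier's signature are assumptions for illustration:

import torch

def discriminative_reward(generator, sentence_classifier, img_feat, label, max_len=20):
    # sample w̃ ~ p_L(w) together with its per-step log-probabilities
    # (generator.sample is a hypothetical interface)
    tokens, log_probs = generator.sample(img_feat, label, max_len)
    with torch.no_grad():                        # the reward itself is not differentiated
        class_logits = sentence_classifier(tokens)
    reward = class_logits.softmax(dim=-1).gather(1, label.unsqueeze(1)).squeeze(1)
    return reward, log_probs                     # reward[i] = p_D(C_i | w̃_i)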

Page 12:

Novel Loss

▶ Relevance loss:

L_R = (1/N) Σ_{n=0}^{N−1} Σ_{t=0}^{T−1} log p_L(w_{t+1} | w_{0:t}, I, C)

▶ Discriminative loss:

R_D(w̃) = p_D(C | w̃)

– The accuracy of this (pretrained) classifier is not important (22%).

▶ Proposed loss:

L_R − λ E_{w̃∼p_L(w)}[R_D(w̃)]

Page 13:

Minimizing Loss

▶ Since the expectation over descriptions is intractable, use Monte Carlo sampling from the LSTM.

▶ ∇_{W_L} E_{w̃∼p_L(w)}[R_D(w̃)] = E_{w̃∼p_L(w)}[R_D(w̃) ∇_{W_L} log p_L(w̃)]

▶ The final gradient to update the weights W_L (sketched below):

∇_{W_L} L_R − λ R_D(w̃) ∇_{W_L} log p_L(w̃)
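One backward pass through the surrogate below produces exactly this gradient: the sampled sentence's reward is detached (treated as a constant), so differentiating reward · log p_L(w̃) yields the Monte Carlo term R_D(w̃) ∇_{W_L} log p_L(w̃). A self-contained sketch with tensor shapes as in the earlier sketches; lam plays the role of the hyperparameter λ:

import torch
import torch.nn.functional as F

def total_loss(logits, targets, sample_log_probs, reward, lam=0.5):
    # logits: (N, T, V) teacher-forced scores; targets: (N, T) ground-truth words;
    # sample_log_probs: (N, T) log p_L of a sampled sentence w̃; reward: (N,) R_D(w̃)
    loss_r = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    # reward.detach() keeps R_D(w̃) constant, so autograd gives the REINFORCE term
    loss_d = (reward.detach() * sample_log_probs.sum(dim=1)).mean()
    return loss_r - lam * loss_d  # backward() gives ∇L_R − λ R_D(w̃) ∇ log p_L(w̃)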

Page 14:

Experiment

▶ Dataset: Caltech-UCSD Birds 200-2011 (CUB)

– Contains 200 classes of North American bird species.

– 11,788 images.

– 5 sentences giving a detailed description of each bird (these were not collected for the task of visual explanation).

▶ 8,192-dimensional features from the classifier

– Features from the penultimate layer of the compact bilinear fine-grained classification model.

– Pretrained on the CUB dataset.

– Accuracy: 84%.

▶ LSTM

– 1000-dimensional embedding, 1000-dimensional LSTM.

Page 15:

Experiment

▶ Baseline models: description model & definition model

– Description model: trained by conditioning only on the image features as input.

– Definition model: trained to generate explanatory sentences using only the image label as input.

▶ Ablation models: explanation-label model & explanation-discriminative model

Page 16:

Measure

▶ METEOR (image relevance)

– METEOR is computed by matching words (including synonyms) between generated and reference sentences.

▶ CIDEr (image relevance)

– CIDEr measures the similarity of a generated sentence to the reference sentences by counting common n-grams, weighted by TF-IDF.

▶ Similarity (class relevance)

– Compute CIDEr scores using all reference sentences that correspond to a particular class, instead of only the ground truth (see the sketch below).

▶ Rank (class relevance)

– The rank of the true class when all classes are ordered by this similarity.
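A rough sketch of the two class-relevance measures, using a plain TF-IDF n-gram cosine as a stand-in for full CIDEr (the core idea matches: TF-IDF-weighted n-gram overlap). The helper names and the max-over-references pooling are illustrative choices, not the paper's exact protocol:

from collections import Counter
import math

def ngrams(sentence, n=4):
    # all 1..n-grams of a sentence, as a count dictionary
    words = sentence.lower().split()
    grams = Counter()
    for k in range(1, n + 1):
        for i in range(len(words) - k + 1):
            grams[tuple(words[i:i + k])] += 1
    return grams

def tfidf_cosine(a, b, doc_freq, num_docs):
    # cosine similarity between TF-IDF-weighted n-gram count vectors;
    # doc_freq: Counter mapping each n-gram to its document frequency
    def weigh(counts):
        return {g: tf * math.log(num_docs / (1 + doc_freq[g])) for g, tf in counts.items()}
    wa, wb = weigh(a), weigh(b)
    dot = sum(wa[g] * wb.get(g, 0.0) for g in wa)
    norm = math.sqrt(sum(v * v for v in wa.values())) * math.sqrt(sum(v * v for v in wb.values()))
    return dot / (norm + 1e-12)

def class_similarity_and_rank(generated, refs_by_class, doc_freq, num_docs, true_class):
    # similarity: best match against every reference sentence of each class;
    # rank: 1-based position of the true class when sorted by that similarity
    g = ngrams(generated)
    sims = {c: max(tfidf_cosine(g, ngrams(r), doc_freq, num_docs) for r in refs)
            for c, refs in refs_by_class.items()}
    order = sorted(sims, key=sims.get, reverse=True)
    return sims[true_class], order.index(true_class) + 1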

Page 17:

Experiment : Results

Figure: Result


Page 18:

Experiment : Results

▶ Comparison of explanations, baselines, and ablations.

– Green: correct, yellow: mostly correct, red: incorrect.

– 'Red eye' is a class-relevant attribute.

Page 19:

Experiment : Results

▶ Comparison of explanations and definitions.

– The definition model can produce sentences that are not image relevant.

Page 20:

Experiment : Results

▶ Role of the discriminative loss

– Both models generate visually correct sentences.

– 'Black head' is one of the most prominent distinguishing properties of this vireo type.