
Handling Uncertain Observations in Unsupervised Topic-Mixture Language Model Adaptation

Ekapol Chuangsuwanich1, Shinji Watanabe2, Takaaki Hori2, Tomoharu Iwata2, James Glass1

Presenter: 郝柏翰 (Hao Po-Han), 2013/03/05

ICASSP 2012

1 MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, Massachusetts, USA
2 NTT Communication Science Laboratories, NTT Corporation, Japan


Outline

• Introduction

• Topic Tracking Language Model (TTLM)

• TTLM Using Confusion Network Inputs (TTLMCN)

• Experiments

• Conclusion


Introduction

• In a real environment, acoustic and language features often vary depending on the speakers, speaking styles and topic changes.

• To accommodate these changes, speech recognition approaches that include the incremental tracking of changing environments have attracted attention.

• This paper proposes a topic tracking language model that can adaptively track changes in topics based on current text information and previously estimated topic models in an on-line manner.


TTLM

• Tracking temporal changes in language environments


TTLM

• A long session of speech input is divided into chunks

• Each chunk is modeled by different topic distributions

• The current topic distribution depends on the topic distribution of the past H chunks and precision parameters α as follows:

P(\theta_t \mid \{\hat{\theta}_{t-h}\}_{h=1}^{H}, \alpha_t) \propto \prod_{k=1}^{K} \theta_{tk}^{\sum_{h=1}^{H} \alpha_{th} \hat{\theta}_{(t-h)k} - 1}, \quad t = 1, 2, \dots, T

where \theta_{tk} \ge 0 and \sum_{k=1}^{K} \theta_{tk} = 1.
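The right-hand side is a Dirichlet kernel, so the prior on θ_t is a Dirichlet distribution whose parameters are the history-weighted past estimates. A minimal NumPy sketch of this prior, with toy values for K, H, and the precision weights (illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

K, H = 4, 2                               # topics, history length (toy values)
theta_hat = rng.dirichlet(np.ones(K), H)  # stand-ins for past estimates theta_hat_{t-h}
alpha = np.array([10.0, 5.0])             # precision weights alpha_{th}, h = 1..H

# Dirichlet parameters: sum_h alpha_{th} * theta_hat_{(t-h)k}
dirichlet_params = alpha @ theta_hat

# Draw a topic distribution for the current chunk from this prior
theta_t = rng.dirichlet(dirichlet_params)
print(theta_t, theta_t.sum())             # sums to 1
```

Larger precision weights concentrate θ_t around the recent estimates; smaller values allow the topic mixture to drift faster.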


TTLM

• With the topic distribution, the unigram probability of a word w_m in the chunk can be recovered from the topic and word probabilities:

P(w_m) = \sum_{k=1}^{K} \hat{\theta}_{tk} \, \phi_{k w_m}

• where \phi_{k w_m} is the unigram probability of word w_m in topic k.

• The adapted n-gram can be used in a second recognition pass for better results.
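The recovery is simply a mixture of topic-conditional unigram distributions. A sketch with random toy φ and θ̂ (only the shapes matter here):

```python
import numpy as np

rng = np.random.default_rng(1)

K, V = 4, 1000                           # topics, vocabulary size (toy values)
phi = rng.dirichlet(np.ones(V), K)       # phi[k, w]: unigram probs per topic
theta_hat_t = rng.dirichlet(np.ones(K))  # estimated topic mix for the chunk

p_unigram = theta_hat_t @ phi            # P(w_m) = sum_k theta_hat_tk * phi_{k w_m}
assert np.isclose(p_unigram.sum(), 1.0)  # still a proper distribution
```

In the second pass, these adapted unigrams would typically be folded back into the baseline n-gram (e.g., by unigram rescaling) before rescoring.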


TTLMCN

• Consider a confusion network with M word slots.

• Each word slot m can contain a different number of arcs A_m,

• with each arc containing a word w_ma and a corresponding arc posterior d_ma.

• s_ma is a binary selection parameter, where s_ma = 1 indicates that arc a of slot m is selected.

[Figure: a speech session divided into chunks (chunk 1, chunk 2, chunk 3); within a chunk, a confusion network with word slots (slot 1, slot 2, slot 3), e.g. A_1 = 3 arcs in slot 1]
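A minimal sketch of this structure as plain Python data (class and field names are illustrative, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Arc:
    word: str         # w_ma: the word hypothesis on arc a of slot m
    posterior: float  # d_ma: the arc posterior from the recognizer

# One chunk of the confusion network: M word slots, each with A_m arcs
chunk = [
    [Arc("the", 0.7), Arc("a", 0.2), Arc("uh", 0.1)],  # slot 1, A_1 = 3
    [Arc("topic", 0.9), Arc("topics", 0.1)],           # slot 2, A_2 = 2
    [Arc("model", 1.0)],                               # slot 3, A_3 = 1
]

# Arc posteriors within a slot sum to one
for m, slot in enumerate(chunk, start=1):
    assert abs(sum(arc.posterior for arc in slot) - 1.0) < 1e-9, f"slot {m}"
```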


TTLMCN

• For each chunk t, we can write the joint distribution of the words W_t, latent topics Z_t, and arc selections S_t, conditioned on the topic probabilities θ_t, unigram probabilities Φ, and arc posteriors D_t, as follows:

P(W_t, Z_t, S_t \mid \theta_t, \Phi, D_t) = \prod_{m=1}^{M} \theta_{t z_m} \prod_{a=1}^{A_m} \left( \phi_{z_m w_{ma}} \, d_{ma} \right)^{s_{ma}}
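To make the factorization concrete, here is a toy evaluation of this joint probability for one fixed assignment of topics z_m and arc selections (all values and shapes are illustrative):

```python
import numpy as np

theta_t = np.array([0.6, 0.4])      # topic distribution for chunk t, K = 2
phi = np.array([[0.5, 0.3, 0.2],    # phi[k, w]: word probs per topic,
                [0.1, 0.6, 0.3]])   # toy vocabulary of 3 word ids

# Two word slots; each arc is (word_id, arc posterior d_ma)
slots = [[(0, 0.7), (1, 0.3)],
         [(2, 0.8), (1, 0.2)]]
z = [0, 1]  # latent topic z_m chosen for each slot
s = [0, 0]  # index of the arc with s_ma = 1 in each slot

joint = 1.0
for m, slot in enumerate(slots):
    w, d = slot[s[m]]
    # slot m contributes theta_{t z_m} * (phi_{z_m w_ma} * d_ma)^{s_ma}, with s_ma = 1
    joint *= theta_t[z[m]] * phi[z[m], w] * d
print(joint)
```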


TTLMCN

• Graphical representation of TTLMCN


Experiments (MIT-OCW)

• MIT-OCW consists mainly of lectures given at MIT. Each lecture is typically two hours long. We segmented the lectures with a voice activity detector into utterances averaging two seconds each.


Comparison of TTLM and TTLMCN

• We can see that the topic probability of TTLMCN is more similar to the oracle experiment than that of TTLM, especially in the low-probability regions.

• The KL divergence between TTLM and the oracle was 3.3; between TTLMCN and the oracle it was 1.3.
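For reference, the KL divergence used for this comparison can be computed as below (toy distributions; the paper's actual topic distributions are not reproduced here):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions, with smoothing for zeros."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

oracle = [0.70, 0.20, 0.10]   # hypothetical oracle topic distribution
ttlm   = [0.40, 0.35, 0.25]   # hypothetical TTLM estimate
ttlmcn = [0.65, 0.22, 0.13]   # hypothetical TTLMCN estimate
print(kl_divergence(oracle, ttlm), kl_divergence(oracle, ttlmcn))
```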


Conclusion

• We described an extension of the TTLM that handles errors in speech recognition. The proposed model takes a confusion network as input instead of a single ASR hypothesis, which improved performance even in high-WER conditions.

• The gain in word error rate was not very large, since the LM typically contributes little to the overall performance of LVCSR.


Significance Test (T-Test)

H0: the experimental and control groups follow the same normal distribution
H1: the experimental and control groups follow different normal distributions



Significance Test (T-Test)

• Example:

X: 5 7 5 3 5 3 3 9
Y: 8 1 4 6 6 4 1 2

\sum x = 40, \; M_x = 5 \qquad \sum y = 32, \; M_y = 4

S_x^2 = 4.571 \qquad S_y^2 = 6.571

t = \frac{M_x - M_y}{\sqrt{S_x^2/n + S_y^2/n}} = \frac{5 - 4}{\sqrt{4.571/8 + 6.571/8}} \approx 0.847

In Excel: =T.TEST(A1:A10, B1:B10, 1, 2)
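The same test can be reproduced with SciPy (equal-variance two-sample t-test; note that ttest_ind returns a two-tailed p-value, while the T.TEST call above uses one tail):

```python
import numpy as np
from scipy import stats

x = np.array([5, 7, 5, 3, 5, 3, 3, 9])
y = np.array([8, 1, 4, 6, 6, 4, 1, 2])

print(x.mean(), y.mean())            # 5.0 4.0
print(x.var(ddof=1), y.var(ddof=1))  # ~4.571 ~6.571

t, p = stats.ttest_ind(x, y)         # pooled-variance two-sample t-test
print(t, p)                          # t ~ 0.847
```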
