
專題研究 (2): Feature Extraction, Acoustic Model Training, WFST Decoding

Prof. Lin-Shan Lee, TA. Yun-Chiao Li


Announcement

You will probably have many questions starting from today. Go to the ptt2 board "SpeechProj"; your problem can probably help others too.


Linux Shell Script Basics

echo "Hello"        (prints "Hello" on the screen)
a=ABC               (assigns ABC to the variable a)
echo $a             (prints ABC on the screen)
b=$a.log            (assigns ABC.log to b)
echo $b > testfile  (writes the string "ABC.log" into testfile)

command -h          (most commands print their help information with -h)
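Putting these together, a minimal sketch of a shell script (the file names here are only for illustration):

#!/bin/bash
# assign a value and build a derived file name from it
a=ABC
b=$a.log              # b now holds the string "ABC.log"
echo "Hello"          # prints Hello on the screen
echo $b > testfile    # writes ABC.log into the file testfile
cat testfile          # shows what we just wrote: ABC.log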


02.01.extract.feat.sh / 02.02.convert.htk.feat.sh

Feature Extraction

Feature Extraction - MFCC

02.01.extract.feat.sh

Example of MFCC
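The slides above show the MFCC pipeline and an example feature file as figures. As a rough sketch, the core of a Kaldi feature-extraction script like 02.01.extract.feat.sh is usually a pair of calls such as the following (the config file and paths are illustrative, not necessarily the course's exact setup):

# compute MFCC features for every utterance listed in wav.scp
compute-mfcc-feats --config=conf/mfcc.conf \
  scp:data/train/wav.scp ark,scp:mfcc/raw_mfcc_train.ark,mfcc/raw_mfcc_train.scp

# per-speaker cepstral mean/variance statistics for later normalization
compute-cmvn-stats --spk2utt=ark:data/train/spk2utt \
  scp:mfcc/raw_mfcc_train.scp ark,scp:mfcc/cmvn_train.ark,mfcc/cmvn_train.scp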

02.02.convert.htk.feat.sh

The Hidden Markov Model Toolkit (HTK) is the toolkit we used previously; in this project we learn Kaldi instead. Vulcan provides an interface to convert features from one format to the other.

Type "bash 02.02.convert.htk.feat.sh" and the features will be converted to HTK format.
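Inside Kaldi the conversion itself can be done with the copy-feats-to-htk tool; a minimal sketch, with an assumed output directory and the default HTK extension:

# write each utterance's features as a separate HTK-format file under htk_feats/
copy-feats-to-htk --output-dir=htk_feats --output-ext=fea \
  scp:mfcc/raw_mfcc_train.scp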

03.01.mono0a.train.sh

Acoustic Model Training

Acoustic Model

Hidden Markov Model/Gaussian Mixture Model

3 states per model; an example topology is sketched below.

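For reference, a 3-state left-to-right HMM is described in Kaldi by a topology file roughly like the following (a generic example, not necessarily the exact topo used in the course; the phone ids inside <ForPhones> are placeholders):

<Topology>
 <TopologyEntry>
  <ForPhones> 1 2 3 4 </ForPhones>
  <State> 0 <PdfClass> 0 <Transition> 0 0.75 <Transition> 1 0.25 </State>
  <State> 1 <PdfClass> 1 <Transition> 1 0.75 <Transition> 2 0.25 </State>
  <State> 2 <PdfClass> 2 <Transition> 2 0.75 <Transition> 3 0.25 </State>
  <State> 3 </State>
 </TopologyEntry>
</Topology>

Each of states 0-2 is an emitting state with a self-loop and a forward transition; state 3 is the non-emitting final state.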

Acoustic model training (1/2)

When training the acoustic model we need labelled data:

material/train.txt

03.01.mono0a.train.sh


We lack the alignment information needed to train directly, so the HMM is initialized by aligning the frames equally across the states (a flat start), followed by Gaussian Mixture Model (GMM) accumulation and estimation. You might want to check "HMM Parameter Estimation" in the HTK Book, or "HMM problem 3" in the course.
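A minimal sketch of what one such iteration looks like in terms of Kaldi's GMM tools (the feature dimension, rspecifiers and paths are illustrative; 03.01.mono0a.train.sh wraps all of this for you):

# flat start: initialize a monophone model and tree from the topology
gmm-init-mono data/lang/topo 39 exp/mono0a/0.mdl exp/mono0a/tree

# one training iteration: accumulate statistics over the (flat) alignment, then re-estimate
gmm-acc-stats-ali exp/mono0a/0.mdl scp:mfcc/train.scp ark:exp/mono0a/ali.0 exp/mono0a/0.acc
gmm-est exp/mono0a/0.mdl exp/mono0a/0.acc exp/mono0a/1.mdl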

Acoustic model training (2/2)


Refine the alignment at specific iterations (listed in the variable realign_iters).
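In practice the whole monophone stage is launched through Kaldi's wrapper script, and the realignment schedule is a variable inside it; a sketch (directory names and the exact iteration list are illustrative):

# run monophone training end to end
steps/train_mono.sh --nj 4 data/train data/lang exp/mono0a

# inside train_mono.sh, realignment only happens at iterations such as
realign_iters="1 2 3 4 5 6 7 8 9 10 12 14 16 18 20 23 26 29 32 35 38"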

Introduction to WFST

FST

An FSA "accepts" a set of strings; view an FSA as a representation of a possibly infinite set of strings. Start state(s) are bold; final/accepting states have an extra circle. This example represents the infinite set {ab, aab, aaab, ...}.
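As a concrete sketch, that acceptor can be written in OpenFst's text format and compiled with fstcompile (the symbol table and file names are illustrative):

# symbol table: map each label to an integer id (0 is reserved for epsilon)
cat > syms.txt <<EOF
<eps> 0
a 1
b 2
EOF

# acceptor for {ab, aab, aaab, ...}: one or more a's, then b, ending in final state 2
cat > aab.txt <<EOF
0 1 a
1 1 a
1 2 b
2
EOF

fstcompile --acceptor --isymbols=syms.txt aab.txt aab.fst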

WFST

Like a normal FSA, but with costs on the arcs and on final states. Note: the cost comes after "/"; for a final state, "2/1" means final cost 1 on state 2. This example assigns the string ab a total cost of 3 (= 1 + 1 + 1); every other string is not accepted (infinite cost).

WFST Composition

Notation: C = A ∘ B means C is A composed with B.
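With the OpenFst command-line tools this is (file names illustrative; A's arcs must be sorted on output labels before composing):

fstarcsort --sort_type=olabel A.fst A_sorted.fst
fstcompose A_sorted.fst B.fst C.fst     # C = A ∘ B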

WFST Component

HCLG = H ∘ C ∘ L ∘ G
H: HMM structure
C: context-dependent relabeling
L: lexicon
G: language model acceptor


Framework for Speech Recognition


WFST Component

L (Lexicon)

H (HMM)

G (Language Model)

Where is C? (Context-Dependent)

Training WFST

03.02.mono0a.mkgraph.sh

03.02.mono0a.mkgraph.sh
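In Kaldi, building HCLG is typically a single call to the graph-building wrapper; a sketch of what 03.02.mono0a.mkgraph.sh most likely boils down to (the directories, and the --mono flag for a monophone system, are assumptions about the course setup):

# compose H, C, L and G and optimize the result into exp/mono0a/graph/HCLG.fst
utils/mkgraph.sh --mono data/lang exp/mono0a exp/mono0a/graph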

03.03.mono0a.fst.sh

Decoding WFST

Decoding WFST (1/2)

From HCLG we have the relationship from HMM states to words. We need another WFST, U, representing the acoustic scores of the utterance. We compose U with HCLG, i.e. S = U ∘ HCLG; searching for the best path(s) on S gives the recognition result.

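In Kaldi this composition-and-search happens inside the decoder binaries; a minimal sketch of a best-path decode (the beam, acoustic scale, paths and the add-deltas feature pipeline are illustrative, not the course's exact 03.03 script):

gmm-decode-faster --beam=16.0 --acoustic-scale=0.1 \
  exp/mono0a/final.mdl exp/mono0a/graph/HCLG.fst \
  "ark:add-deltas scp:data/test/feats.scp ark:- |" \
  ark,t:exp/mono0a/decode/words.txt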

Decoding WFST (2/2)

During decoding we need to specify the relative weights of the acoustic model and the language model.

Split the corpus into training, development (dev), and test sets. The training set is used to train the acoustic model. All candidate acoustic model weights are evaluated on the dev set and the best one is kept. The test set is used to measure the final performance (Word Error Rate, WER).

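A sketch of how such a sweep might look, decoding the dev set at several acoustic scales and scoring each with compute-wer (the scale values, paths and the int2sym mapping step are illustrative):

# try several acoustic scales on the dev set and keep the one with the lowest WER
for acwt in 0.05 0.0625 0.0833 0.1 0.125; do
  gmm-decode-faster --acoustic-scale=$acwt \
    exp/mono0a/final.mdl exp/mono0a/graph/HCLG.fst \
    "ark:add-deltas scp:data/dev/feats.scp ark:- |" \
    ark,t:- | utils/int2sym.pl -f 2- exp/mono0a/graph/words.txt > dev_hyp.$acwt.txt
  compute-wer --text --mode=present ark:data/dev/text ark:dev_hyp.$acwt.txt
done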

03.03.mono0a.fst.sh (1/2)

03.03.mono0a.fst.sh (2/2)

02.01~03.04.sh

Homework

To Do

Copy the data into your own directory: cp -r /share/

Execute the following commands in order:
bash 01.format.data.sh
bash 02.01.extract.feat.sh
bash 02.02.convert.htk.feat.sh
…

Observe the output and write up your report. You might want to check the HTK Book for the details of acoustic model training.

Some Helpful References

「使用加權有限狀態轉換器的基於混合詞與次詞以文字及語音指令偵測口語詞彙」(roughly: "Spoken term detection with text and speech queries based on hybrid word/subword units using weighted finite-state transducers"), Chapter 3: https://www.dropbox.com/s/dsaqh6xa9dp3dzw/wfst_thesis.pdf
