專題研究 (2) (Special Research Project, Part 2)
Feature Extraction, Acoustic Model Training, WFST Decoding
Prof. Lin-Shan Lee; TA: Yun-Chiao Li

Announcement
- You will probably have many questions after today's material.
- Post them on the ptt2 board "SpeechProj"; your problem can probably help others as well.

Linux Shell Script Basics (a runnable version appears in the Code Sketches section at the end)
- echo "Hello"        (prints "Hello" on the screen)
- a=ABC               (assigns ABC to the variable a)
- echo $a             (prints ABC on the screen)
- b=$a.log            (assigns ABC.log to b)
- echo $b > testfile  (writes the string "ABC.log" into testfile; note that cat $b > testfile would instead copy the contents of the file ABC.log)
- command -h          (most commands print their help message with -h or --help)

Feature Extraction
- 02.01.extract.feat.sh
- 02.02.convert.htk.feat.sh

Feature Extraction: MFCC
[MFCC pipeline diagram shown on the original slide]

02.01.extract.feat.sh
[script listing shown on the original slide; see the sketch at the end]

Example of MFCC
[example MFCC feature dump shown on the original slide]

02.02.convert.htk.feat.sh
- The Hidden Markov Model Toolkit (HTK) is the toolkit we used before; in this project we learn Kaldi.
- Vulcan provides an interface to convert features from one format to the other.
- Type "bash 02.02.convert.htk.feat.sh"; the features will then be converted to HTK format.

Acoustic Model Training
- 03.01.mono0a.train.sh

Acoustic Model
- Hidden Markov Model / Gaussian Mixture Model (HMM/GMM)
- 3 states per model
[example HMM topology shown on the original slide]

Acoustic Model Training (1/2)
- Training the acoustic model requires labelled data: material/train.txt.
- The transcriptions carry no frame-level timing, so 03.01.mono0a.train.sh initializes the HMM by aligning the frames of each utterance equally across the states (a flat start).
- It then iterates Gaussian Mixture Model (GMM) statistics accumulation and parameter estimation.
- You might want to check "HMM Parameter Estimation" in the HTK Book, or "HMM problem 3" in the course slides.

Acoustic Model Training (2/2)
- The alignment is refined at specific iterations, listed in the variable realign_iters.
(A sketch of this training loop appears in the Code Sketches section at the end.)

Introduction to WFST

FST
- An FSA "accepts" a set of strings; view an FSA as a representation of a possibly infinite set of strings.
- Start states are bold; final/accepting states are drawn with an extra circle.
- The example on the slide represents the infinite set {ab, aab, aaab, ...}.

WFST
- Like a normal FSA, but with costs on the arcs and on the final states.
- The cost comes after "/"; for a final state, "2/1" means state 2 carries final cost 1.
- The example maps ab to cost 3 (= 1 + 1 + 1) and every other string to infinity (a toy version is built in the Code Sketches section at the end).

WFST Composition
- Notation: C = A ∘ B means C is A composed with B.
[composition example shown on the original slide]

WFST Components
- HCLG = H ∘ C ∘ L ∘ G
- H: HMM structure
- C: context-dependent relabeling
- L: lexicon
- G: language model acceptor

Framework for Speech Recognition
[system diagram shown on the original slide]

WFST Components
- Where is C (the context-dependency transducer)?
[example H (HMM), L (lexicon), and G (language model) transducers shown on the original slide]

Training WFST
- 03.02.mono0a.mkgraph.sh

03.02.mono0a.mkgraph.sh
[script listing shown on the original slides; see the sketch at the end]

Decoding WFST
- 03.03.mono0a.fst.sh

Decoding WFST (1/2)
- From HCLG we have the relationship from HMM states to words.
- We need another WFST, U, representing the utterance to be recognized.
- Compose U with HCLG, i.e., S = U ∘ HCLG.
- Searching for the best path(s) in S gives the recognition result.

Decoding WFST (2/2)
- During decoding we must specify the respective weights of the acoustic model and the language model.
- Split the corpus into train, dev, and test sets:
  - the training set is used to train the acoustic model;
  - try all the acoustic model weights on the dev set, and keep the best one;
  - the test set is used to measure the final performance (Word Error Rate, WER).
(A sketch of this tuning loop appears in the Code Sketches section at the end.)

03.03.mono0a.fst.sh (1/2)
[script listing shown on the original slide]

03.03.mono0a.fst.sh (2/2)
[script listing shown on the original slide]

Homework
- 02.01~03.04.sh

To Do
- Copy the data into your own directory: cp -r /share/
- Execute the following commands:
  bash 01.format.data.sh
  bash 02.01.extract.feat.sh
  bash 02.02.convert.htk.feat.sh
  ...
- Observe the output and write the report.
- You might want to check the HTK Book for the acoustic model training part.

Some Helpful References
- 「使用加權有限狀態轉換器的基於混合詞與次詞以文字及語音指令偵測口語詞彙」 ("Spoken Term Detection with Text and Spoken Queries Based on Hybrid Words and Subwords Using Weighted Finite-State Transducers"), Chapter 3:
  https://www.dropbox.com/s/dsaqh6xa9dp3dzw/wfst_thesis.pdf
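Code Sketches

The sketches below expand on the steps above. They are illustrations under stated assumptions, not the actual course scripts; file and directory names are made up unless a slide names them. First, the shell basics as one runnable script:

    #!/usr/bin/env bash
    # Minimal demo of the shell-script basics above.
    echo "Hello"          # prints Hello on the screen
    a=ABC                 # assign the string ABC to a (no spaces around =)
    echo $a               # prints ABC
    b=$a.log              # b now holds ABC.log
    echo $b > testfile    # writes the string "ABC.log" into testfile
    cat testfile          # prints ABC.log back
    grep --help | head -n 2   # most commands describe themselves via -h or --help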
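Feature extraction: 02.01.extract.feat.sh most likely wraps Kaldi's compute-mfcc-feats, and 02.02.convert.htk.feat.sh the HTK conversion; copy-feats-to-htk is one standard way to do the latter, though the course script may go through the Vulcan interface instead. A minimal sketch, assuming a standard Kaldi layout (wav.scp, conf/mfcc.conf, and the output paths are all assumptions):

    #!/usr/bin/env bash
    # Sketch: MFCC extraction (the likely core of 02.01.extract.feat.sh).
    compute-mfcc-feats --config=conf/mfcc.conf \
        scp:data/train/wav.scp \
        ark,scp:feat/train.mfcc.ark,feat/train.mfcc.scp

    # Sketch: conversion to HTK format (the likely core of
    # 02.02.convert.htk.feat.sh); writes one HTK-format file per utterance.
    copy-feats-to-htk --output-dir=feat/htk --output-ext=mfc \
        scp:feat/train.mfcc.scp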
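The flat start and the accumulate/estimate loop from the "Acoustic Model Training" slides, as a hedged sketch of what 03.01.mono0a.train.sh does. The feature pipeline, directory names, iteration count, and the realign_iters schedule are assumptions; only the tool names are standard Kaldi:

    #!/usr/bin/env bash
    # Sketch of flat-start monophone training (the idea behind 03.01.mono0a.train.sh).
    feats="ark,s,cs:add-deltas scp:data/train/feats.scp ark:- |"

    # Flat start: a 3-state HMM per phone, with no alignment information yet.
    gmm-init-mono data/lang/topo 39 exp/mono/0.mdl exp/mono/tree

    # One training graph per utterance, compiled from the transcriptions.
    compile-train-graphs exp/mono/tree exp/mono/0.mdl data/lang/L.fst \
        "ark:sym2int.pl -f 2- data/lang/words.txt data/train/text |" \
        ark:exp/mono/fsts.ark

    # Initial alignment: spread each utterance's frames equally over the states.
    align-equal-compiled ark:exp/mono/fsts.ark "$feats" ark:exp/mono/ali.ark

    realign_iters="1 2 3 4 5 6 8 10 12 15 20"   # re-align only at these iterations
    x=0
    while [ $x -lt 30 ]; do
      if echo "$realign_iters" | grep -wq $x; then
        # Refine the alignment with the current model (Viterbi alignment).
        gmm-align-compiled exp/mono/$x.mdl ark:exp/mono/fsts.ark \
            "$feats" ark:exp/mono/ali.ark
      fi
      # GMM statistics accumulation, then parameter (re-)estimation.
      gmm-acc-stats-ali exp/mono/$x.mdl "$feats" ark:exp/mono/ali.ark exp/mono/$x.acc
      gmm-est exp/mono/$x.mdl exp/mono/$x.acc exp/mono/$((x+1)).mdl
      x=$((x+1))
    done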
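The FST/WFST slides can be reproduced with OpenFst's command-line tools, which come with Kaldi. In fstcompile's text format each arc line reads "src dst ilabel olabel [cost]", and a line with just a state number (plus an optional cost) marks a final state. The file names below are made up; composing the toy acceptor with a one-string acceptor U is the decoding picture S = U ∘ HCLG in miniature:

    #!/usr/bin/env bash
    # Symbol table shared by the FSTs below (ids are arbitrary, 0 is epsilon).
    printf '%s\n' '<eps> 0' 'a 1' 'b 2' > syms.txt

    # Weighted acceptor for a a* b: every arc costs 1, and final state 2
    # carries final cost 1 (the "2/1" notation), so "ab" costs 1+1+1 = 3.
    printf '%s\n' '0 1 a a 1' '1 1 a a 1' '1 2 b b 1' '2 1' > A.txt
    fstcompile --isymbols=syms.txt --osymbols=syms.txt A.txt | \
        fstarcsort --sort_type=ilabel > A.fst

    # U: a linear acceptor for the single string "aab".
    printf '%s\n' '0 1 a a' '1 2 a a' '2 3 b b' '3' > U.txt
    fstcompile --isymbols=syms.txt --osymbols=syms.txt U.txt U.fst

    # S = U o A: compose, then search for the best path; "aab" comes out
    # with cost 1+1+1+1 = 4, mirroring S = U o HCLG in decoding.
    fstcompose U.fst A.fst | fstshortestpath | \
        fstprint --isymbols=syms.txt --osymbols=syms.txt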
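Graph creation (03.02.mono0a.mkgraph.sh): in Kaldi, building HCLG = H ∘ C ∘ L ∘ G is normally wrapped by the standard utils/mkgraph.sh script. A plausible sketch, noting that older Kaldi versions take an explicit --mono flag for monophone models and that the directory names here are assumptions:

    #!/usr/bin/env bash
    # Sketch: build the decoding graph for the monophone model.
    # Internally, mkgraph.sh composes G (language model), L (lexicon),
    # C (context dependency), and H (HMM structure), determinizing and
    # minimizing in between (fsttablecompose, fstdeterminizestar, ...).
    utils/mkgraph.sh --mono data/lang_test exp/mono exp/mono/graph
    # Result: exp/mono/graph/HCLG.fst, mapping HMM transition-ids to words.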
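Decoding and weight tuning (03.03.mono0a.fst.sh and the "Decoding WFST (2/2)" slide): decode the dev set several times with different acoustic-model weights, keep the weight with the lowest WER, and only then decode the test set once. A sketch using standard Kaldi tools; the paths, beam, scale list, and scoring flags are assumptions and may vary slightly across Kaldi versions:

    #!/usr/bin/env bash
    # Sketch: tune the acoustic scale on the dev set, then report WER.
    feats="ark:add-deltas scp:data/dev/feats.scp ark:- |"

    for scale in 0.05 0.0625 0.0833 0.1 0.125; do
      # Decode: search the best path through U o HCLG for each utterance.
      gmm-decode-faster --beam=16.0 --acoustic-scale=$scale \
          --word-symbol-table=exp/mono/graph/words.txt \
          exp/mono/final.mdl exp/mono/graph/HCLG.fst "$feats" ark,t:- | \
        utils/int2sym.pl -f 2- exp/mono/graph/words.txt > hyp.$scale.txt

      # Score this run against the dev transcriptions.
      compute-wer --text --mode=present \
          ark:data/dev/text ark:hyp.$scale.txt
    done
    # Pick the scale with the lowest dev WER, then decode data/test once
    # with that scale to report the final Word Error Rate.

Tuning on the dev set rather than the test set keeps the reported test WER an honest estimate of performance.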