  • Hashimoto, Kei; Oura, Keiichiro; Nankaku, Yoshihiko; Tokuda, Keiichi

    2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 03/2016
    Conference Proceeding, Journal Article

    This paper proposes a new training method of deep neural networks (DNNs) for statistical parametric speech synthesis. DNNs have recently been used as acoustic models that represent mapping functions from linguistic features to acoustic features in statistical parametric speech synthesis. Two problems remain in conventional DNN-based speech synthesis: 1) the inconsistency between the training and synthesis criteria; and 2) the over-smoothing of the generated parameter trajectories. In this paper, we introduce the parameter trajectory generation process considering the global variance (GV) into the training of DNNs. The proposed method yields a unified framework that consistently uses the same criterion in both training and synthesis, and the model parameters are optimized for parameter generation considering the GV. Experimental results show that the proposed method outperforms the conventional method in the naturalness of synthesized speech.
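
    The core idea of GV-aware training can be illustrated with a minimal NumPy sketch: the global variance of a trajectory is its per-dimension variance over time, and a training loss can penalize both the frame-wise error and the mismatch between the GV of the generated and natural trajectories. This is an illustrative simplification, not the paper's actual criterion (which integrates GV into the trajectory likelihood); the function names and the weighting scheme here are assumptions.

    ```python
    import numpy as np

    def global_variance(traj):
        """Per-dimension variance over time of a (T, D) trajectory."""
        return np.var(traj, axis=0)

    def gv_aware_loss(generated, natural, w_gv=1.0):
        """Frame-wise MSE plus a penalty on global-variance mismatch.

        A smoothed (low-GV) trajectory incurs an extra cost even when
        its frame-wise error is small, countering over-smoothing.
        """
        mse = np.mean((generated - natural) ** 2)
        gv_err = np.mean(
            (global_variance(generated) - global_variance(natural)) ** 2
        )
        return mse + w_gv * gv_err
    ```

    With such a loss, a heavily averaged trajectory is penalized through the GV term, which is one intuition for why GV-aware training reduces the over-smoothing the abstract mentions.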