Discriminative Manifold Learning for Automatic Speech Recognition
Author: Vikrant Tomar
Release: 2016
ISBN-10: OCLC:953107364
Rating: 4/5
Discriminative Manifold Learning for Automatic Speech Recognition was written by Vikrant Tomar and released in 2016. Book excerpt:

Manifold learning techniques have received a great deal of attention in the recent literature. The underlying assumption of these techniques is that high-dimensional data can be considered a set of geometrically related points lying on, or close to, the surface of a smooth low-dimensional manifold embedded in the ambient space. These techniques have been used in a wide variety of application domains, such as face recognition, speaker recognition, and speech recognition. In automatic speech recognition (ASR), previous studies on this topic have primarily focused on unsupervised manifold learning techniques for dimensionality-reducing feature-space transformations. The goal of these techniques is to preserve, through the transformation, the manifold-based geometrical relationships that exist in the speech data. However, they fail to exploit the discriminative structure between classes of speech sounds. The work in this thesis investigates incorporating inter-class discrimination into manifold learning techniques.

The contributions of this thesis fall into two major categories. The first is discriminative manifold learning (DML) techniques for dimensionality-reducing feature-space transformations. The second is the use of DML-based constraints to regularize the training of deep neural networks (DNNs).

The first contribution is a framework for DML-based feature-space transformations for ASR. These techniques attempt to preserve the local, manifold-based nonlinear relationships between feature vectors while maximizing a criterion related to separating speech classes. Two techniques are proposed. The first is locality preserving discriminant analysis (LPDA).
In LPDA, the manifold-domain relationships between feature vectors are characterized by a Euclidean-distance-based kernel. The second technique is correlation preserving discriminant analysis (CPDA), which uses a cosine-correlation kernel. LPDA and CPDA are compared to two well-known dimensionality-reducing transformations, linear discriminant analysis (LDA) and locality preserving projection (LPP), on two separate tasks involving noise-corrupted utterances: connected digits and read newspaper text. The proposed approaches provide up to 30% reductions in word error rate (WER) relative to LDA and LPP.

The second major contribution of this thesis is the application of DML-based constraints to the training of DNNs for ASR. DNNs have been successfully applied to a variety of ASR tasks, both for discriminative feature extraction and in hybrid acoustic-modeling scenarios. Despite rapid progress in DNN research, a number of challenges remain in training DNNs. This part of the thesis proposes a manifold regularized deep neural network (MRDNN) training approach that constrains network learning to preserve the manifold-based relationships between speech feature vectors. This is achieved by incorporating manifold-based locality-preserving constraints into the network's objective criterion. Empirical evidence demonstrates that training a network with manifold constraints strengthens the learning of manifold-based neighborhood preservation and preserves structural compactness in the network's hidden layers. ASR WER is evaluated using these networks on a connected-digits speech-in-noise task and a read-news speech-in-noise task. Compared to DNNs trained without manifold constraints, MRDNNs provide 10% to 38.64% reductions in ASR WER.
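The locality-preserving idea shared by the DML transformations and MRDNN training can be illustrated with a short sketch. The code below is a simplified illustration, not the thesis's actual formulation: the function names, the single-bandwidth heat kernel, and the brute-force neighbor search are all assumptions made for clarity. It builds same-class (intrinsic) and different-class (penalty) neighbor graphs with a Euclidean heat kernel, and evaluates the locality-preserving penalty sum_ij w_ij * ||z_i - z_j||^2 that a manifold-regularized objective would add to an ordinary training loss.

```python
import numpy as np

def heat_kernel_affinity(X, labels, k=5, rho=1.0):
    """Build intrinsic (same-class) and penalty (different-class) affinity
    graphs with a Euclidean heat kernel, graph-embedding style.
    Illustrative sketch: a real system would use approximate
    nearest-neighbour search rather than full pairwise distances."""
    n = X.shape[0]
    # pairwise squared Euclidean distances, shape (n, n)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W_int = np.zeros((n, n))  # attracts same-class neighbours
    W_pen = np.zeros((n, n))  # separates different-class neighbours
    for i in range(n):
        order = np.argsort(d2[i])
        order = order[order != i][:k]  # k nearest neighbours of x_i
        for j in order:
            w = np.exp(-d2[i, j] / rho)
            if labels[i] == labels[j]:
                W_int[i, j] = w
            else:
                W_pen[i, j] = w
    return W_int, W_pen

def manifold_penalty(Z, W):
    """Locality-preserving penalty sum_ij w_ij * ||z_i - z_j||^2,
    evaluated on transformed features (or network outputs) Z."""
    dz2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return float((W * dz2).sum())
```

A DML-style transformation would seek a projection that keeps the intrinsic-graph penalty small while making the penalty-graph term large; an MRDNN-style objective would instead add the intrinsic-graph penalty, suitably weighted, to the network's cross-entropy loss.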