Publication: Statistical Parametric Models and Inference for Biomedical Signal Processing: Applications in Speech and Magnetic Resonance Imaging
Date
2013-02-19
Authors
Hong, Jung
Citation
Hong, Jung. 2012. Statistical Parametric Models and Inference for Biomedical Signal Processing: Applications in Speech and Magnetic Resonance Imaging. Doctoral dissertation, Harvard University.
Abstract
In this thesis, we develop statistical methods for extracting significant information from biomedical signals. Biomedical signals are not only generated by a complex system but are also affected by various random factors during measurement. They can therefore be studied in two aspects: the observational noise that the measured signals experience and the intrinsic structure that the noise-free signals possess. We study Magnetic Resonance (MR) images and speech signals as representative two- and one-dimensional applications.

In MR imaging, we study how observational noise can be effectively modeled and then removed. Magnitude MR images suffer from Rician-distributed, signal-dependent noise. Observing that the squared-magnitude MR image follows a scaled non-central Chi-square distribution with two degrees of freedom, we optimize the parameters of the proposed Rician-adapted Non-local Mean (RNLM) estimator by minimizing the Chi-square unbiased risk estimate in the minimum mean square error sense. A linear expansion of RNLMs is considered to achieve global optimality of the parameters without data dependency, and parallel computation and convolution operations are used to accelerate the method. Experiments show that the proposed method compares favorably with benchmark denoising algorithms.

Parametric modeling of noise-free signals is studied for robust speech applications. Voiced speech is often described by a harmonic model whose fundamental frequency is commonly assumed to be a smooth function of time. Pitch, the perceived tone and an important feature in many speech applications, is obtained by estimating the fundamental frequency. In this thesis, two model-based pitch estimation schemes are introduced. The first uses an iterative Auto-Regressive Moving Average technique to estimate harmonically tied sinusoidal components in noisy speech, with dynamic programming enforcing the smoothness of the fundamental frequency. The second introduces the Continuous-time Voiced Speech (CVS) model, which represents the smooth fundamental frequency as a linear combination of block-wise continuous polynomial bases. The model parameters are obtained via constrained convex optimization, providing an estimate of the instantaneous fundamental frequency. Experiments validate the robustness and accuracy of the proposed methods compared with current state-of-the-art pitch estimation algorithms.
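The noise model underlying the RNLM estimator can be checked numerically. The following is a minimal sketch, not taken from the thesis: if the real and imaginary channels of an MR measurement carry i.i.d. Gaussian noise around a noise-free intensity A, the magnitude is Rician, and the squared magnitude scaled by the noise variance follows a non-central Chi-square distribution with two degrees of freedom. The variable names, parameter values, and use of NumPy/SciPy are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Noise-free complex MR signal of magnitude A, with i.i.d. Gaussian noise
# of standard deviation sigma in the real and imaginary channels.
A, sigma, n = 3.0, 1.0, 200_000
real = A + sigma * rng.standard_normal(n)
imag = sigma * rng.standard_normal(n)

magnitude = np.hypot(real, imag)   # Rician-distributed magnitude values
squared = magnitude ** 2           # squared-magnitude data

# squared / sigma^2 should follow a non-central Chi-square distribution
# with 2 degrees of freedom and non-centrality (A / sigma)^2.
lam = (A / sigma) ** 2
print("empirical mean  :", squared.mean())
print("theoretical mean:", sigma ** 2 * (2 + lam))   # sigma^2 * (df + lambda)

# Kolmogorov-Smirnov check against the non-central Chi-square law.
stat, pval = stats.kstest(squared / sigma ** 2, stats.ncx2(df=2, nc=lam).cdf)
print("KS statistic:", stat, "p-value:", pval)
```

Similarly, the signal family addressed by both pitch estimators can be illustrated with a toy synthesis of the harmonic model with a smoothly varying fundamental frequency. The glide shape, harmonic amplitudes, and noise level below are arbitrary choices for illustration, not values from the thesis.

```python
import numpy as np

fs = 8000                          # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)      # half a second of "voiced" signal

# Smooth fundamental frequency: a toy linear glide from 120 Hz to 150 Hz.
f0 = 120 + 60 * t
phase = 2 * np.pi * np.cumsum(f0) / fs   # instantaneous phase of the fundamental

# Harmonic model: a sum of sinusoids at integer multiples of the fundamental.
amps = [1.0, 0.6, 0.3, 0.15]
voiced = sum(a * np.cos((k + 1) * phase) for k, a in enumerate(amps))
noisy = voiced + 0.1 * np.random.default_rng(1).standard_normal(t.size)
```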
Keywords
electrical engineering
Terms of Use
Metadata Only