Publisher: The University of Arizona.
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract: This document provides an overview of hidden Markov models (HMMs). It begins with some probability background, including descriptions of algorithms used in implementing HMMs. Hidden Markov models come from a class of systems endowed with probabilistic properties that make them useful for modeling situations in which the modeler lacks a full specification of the system in question but has data generated by the system. This is not altogether different from a car mechanic attempting to understand an automobile motor by studying the emissions from the tailpipe and the response to acceleration, all without ever peeking under the hood of the vehicle. To construct a theory of hidden Markov models, we first construct a theory of Markov chains; this section assumes an elementary knowledge of probability theory and univariate calculus. The subsequent two sections describe two applications in detail: one in population genetics, and one in computational linguistics. This document is not meant to serve as a comprehensive text on HMMs, or on Markov models; for more technical discussion of HMMs with many excellent examples, the reader is referred to .
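The abstract mentions algorithms used in implementing HMMs; the most basic of these is the forward algorithm, which computes the probability of an observed sequence by summing over all hidden state paths. The sketch below is illustrative only: the two-state "weather" model and all of its parameters are hypothetical assumptions, not taken from the document.

```python
# A minimal sketch of the forward algorithm for a toy two-state HMM.
# The model (states, observations, probabilities) is a hypothetical example.

def forward(obs, init, trans, emit):
    """Return P(obs) under the HMM, summing over all hidden state paths."""
    # alpha[j] = P(observations so far, current hidden state = j)
    alpha = [init[j] * emit[j][obs[0]] for j in range(len(init))]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(len(alpha))) * emit[j][o]
            for j in range(len(init))
        ]
    return sum(alpha)

# Hypothetical model: hidden states 0=Rainy, 1=Sunny;
# observations 0=walk, 1=shop, 2=clean.
init = [0.6, 0.4]                    # initial state distribution
trans = [[0.7, 0.3], [0.4, 0.6]]     # trans[i][j] = P(next=j | current=i)
emit = [[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]]  # emit[j][o] = P(obs=o | state=j)

print(forward([0, 1, 2], init, trans, emit))
```

The key point, as in the mechanic analogy, is that the computation uses only the observed emissions; the hidden state sequence is never known, only marginalized out.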
Degree Program: Honors College