Show simple item record

dc.contributor.advisor: Greenlee, Wilfred M.
dc.contributor.author: Avila Godoy, Micaela Guadalupe
dc.creator: Avila Godoy, Micaela Guadalupe
dc.date.accessioned: 2013-05-09T09:27:37Z
dc.date.available: 2013-05-09T09:27:37Z
dc.date.issued: 1999
dc.identifier.uri: http://hdl.handle.net/10150/289049
dc.description.abstract: Controlled Markov chains (CMCs) are mathematical models for the control of stochastic sequential decision systems. Starting in the early 1950s with the work of R. Bellman, many basic contributions to CMCs have been made, and numerous applications to engineering, operations research, and economics, among other areas, have been developed. The optimal control problem for CMCs with a countable state space and a general action space is studied for (exponential) total and discounted risk-sensitive cost criteria. General (dynamic programming) results for the finite and the infinite horizon cases are obtained. A set of general conditions is presented for obtaining structural properties of the optimal value function and policies. In particular, monotonicity properties of value functions and optimal policies are established. The approach followed is to show the (sub)modularity of certain functions related to the optimality equations. Four application studies illustrate the general results obtained in this dissertation: equipment replacement, optimal resource allocation, scheduling of uncertain jobs, and inventory control.
dc.language.iso: en_US
dc.publisher: The University of Arizona.
dc.rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
dc.subject: Mathematics.
dc.title: Controlled Markov chains with exponential risk-sensitive criteria: Modularity, structured policies and applications
dc.type: text
dc.type: Dissertation-Reproduction (electronic)
thesis.degree.grantor: University of Arizona
thesis.degree.level: doctoral
dc.identifier.proquest: 9957946
thesis.degree.discipline: Graduate College
thesis.degree.discipline: Mathematics
thesis.degree.name: Ph.D.
dc.identifier.bibrecord: .b40137600
refterms.dateFOA: 2018-08-29T06:20:01Z
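The finite-horizon dynamic programming recursion for the exponential risk-sensitive criterion mentioned in the abstract can be sketched numerically. The criterion minimizes the certainty equivalent (1/g)·log E[exp(g·total cost)], which admits a backward recursion in the multiplicative value W = exp(g·V). The two-state, two-action model below is a toy example with invented costs, transition probabilities, and parameter values; none of these numbers come from the dissertation itself.

```python
import numpy as np

# Hypothetical toy CMC (illustrative numbers only, not from the dissertation).
g = 0.5                                  # risk-sensitivity parameter (g > 0: risk-averse)
N = 10                                   # horizon length
c = np.array([[1.0, 2.0],                # c[x, a]: one-stage cost in state x under action a
              [3.0, 0.5]])
P = np.array([[[0.8, 0.2], [0.3, 0.7]],  # P[x, a, y]: transition probabilities
              [[0.5, 0.5], [0.9, 0.1]]])

# Multiplicative value W = exp(g * V); terminal condition W_N = 1.
W = np.ones(2)
for _ in range(N):
    # Backward recursion: W_n(x) = min_a exp(g * c(x, a)) * sum_y P(y | x, a) * W_{n+1}(y)
    Q = np.exp(g * c) * (P @ W)          # Q[x, a], one risk-sensitive Q-value per state-action pair
    W = Q.min(axis=1)

# Certainty-equivalent optimal cost for each initial state.
V = np.log(W) / g
print(V)
```

The multiplicative form is the standard trick for exponential criteria: because exp(g·Σc) factors over stages, the expectation propagates backward as a product, and taking (1/g)·log at the end recovers the certainty-equivalent value.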


Files in this item

Name: azu_td_9957946_sip1_m.pdf
Size: 2.596 MB
Format: PDF
