Show simple item record

dc.contributor.advisor: Sanders, William H.
dc.contributor.author: Qureshi, Muhammad Akber, 1964-
dc.creator: Qureshi, Muhammad Akber, 1964-
dc.date.accessioned: 2013-04-03T13:15:42Z
dc.date.available: 2013-04-03T13:15:42Z
dc.date.issued: 1992
dc.identifier.uri: http://hdl.handle.net/10150/278184
dc.description.abstract: Reward models have become an important method for specifying performability models for many types of systems. Many methods have been proposed for solving these reward models, but no method has proven itself to be applicable over all system classes and sizes. Furthermore, specification of reward models has usually been done at the state level, which can be extremely cumbersome for realistic models. We describe a method to specify reward models as stochastic activity networks (SANs) with impulse and rate rewards, and a method by which to solve these models via randomization. The method is an extension of one proposed by de Souza e Silva and Gail in which impulse and rate rewards are specified at the SAN level, and solved in a single model. Furthermore, a novel method of discarding trajectories of low probabilities with algorithms to compute bounds on the injected error is proposed. The methodology is presented, together with the results on the time and space efficiency of a particular implementation.
dc.language.iso: en_US
dc.publisher: The University of Arizona.
dc.rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
dc.subject: Engineering, Electronics and Electrical.
dc.subject: Computer Science.
dc.title: Reward model solution methods with impulse and rate rewards: An algorithm and numerical results
dc.type: text
dc.type: Thesis-Reproduction (electronic)
thesis.degree.grantor: University of Arizona
thesis.degree.level: masters
dc.identifier.proquest: 1349471
thesis.degree.discipline: Graduate College
thesis.degree.name: M.S.
dc.identifier.bibrecord: .b27698841
refterms.dateFOA: 2018-04-26T13:24:40Z


Files in this item

Name: azu_td_1349471_sip1_m.pdf
Size: 1.738 MB
Format: PDF
