Improved stochastic dynamic programming for optimal reservoir operation based on the asymptotic convergence of benefit differences.
Subject: Reservoirs -- Mathematical models.
Committee Chair: Davis, Donald R.
Publisher: The University of Arizona.
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract: Stochastic dynamic programming has been widely used to derive optimal operating rules for a single-reservoir system. This thesis presents a new iterative scheme that proves more efficient, in terms of computational effort, than the conventional stochastic dynamic programming approach. The scheme is a hybrid one: the conventional procedure alternates with iterations over a fixed policy, increasing the chance of finding the optimal policy more rapidly. The thesis also introduces a refined technique for deriving transition probability matrices, and uses bounded variables in the recursive equation to make it easier to verify the convergence of the cyclic gain of the system. A computer program implementing the new iterative scheme is developed and applied to a real-world problem to obtain quantitative comparisons. The new scheme achieves a savings of twenty-five percent of the computational time required by the conventional procedure.
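The hybrid scheme described in the abstract can be sketched for a generic average-reward Markov decision process. This is a minimal illustration, not the thesis's actual program: the toy transition matrices, benefit values, and the `fixed_iters` parameter are all invented for the example, and the convergence test uses the spread of successive benefit differences (the "asymptotic convergence of benefit differences" named in the title) as the stopping rule.

```python
import numpy as np

# Hypothetical toy problem: 3 storage states, 2 release decisions.
# P[a] is the transition probability matrix under decision a;
# R[a, s] is the stage benefit of decision a in state s.
# (Illustrative numbers only -- not from the thesis.)
P = np.array([[[0.7, 0.2, 0.1],
               [0.3, 0.5, 0.2],
               [0.1, 0.3, 0.6]],
              [[0.5, 0.4, 0.1],
               [0.2, 0.6, 0.2],
               [0.1, 0.2, 0.7]]])
R = np.array([[2.0, 1.0, 0.5],
              [1.5, 2.5, 1.0]])

def hybrid_sdp(P, R, fixed_iters=5, tol=1e-8, max_iters=10_000):
    """Alternate one policy-improvement sweep with `fixed_iters`
    cheap value updates under the current (fixed) policy.  Stop when
    the successive benefit differences v_{n+1} - v_n have converged,
    i.e. their spread across states falls below `tol`; their common
    value is then the cyclic gain of the system."""
    n_actions, n_states = R.shape
    v = np.zeros(n_states)
    for _ in range(max_iters):
        # Policy improvement: maximize stage benefit + expected value.
        q = R + P @ v                        # shape (n_actions, n_states)
        policy = q.argmax(axis=0)
        v_new = q.max(axis=0)
        diff = v_new - v
        if diff.max() - diff.min() < tol:    # benefit differences converge
            return diff.mean(), policy       # gain per stage, optimal policy
        v = v_new
        # Fixed-policy iterations: no maximization, so each pass is cheap.
        Pp = P[policy, np.arange(n_states)]  # row s = P[policy[s]][s]
        Rp = R[policy, np.arange(n_states)]
        for _ in range(fixed_iters):
            v = Rp + Pp @ v
    raise RuntimeError("did not converge")

gain, policy = hybrid_sdp(P, R)
```

Setting `fixed_iters=0` recovers the conventional successive-approximations procedure, so the two variants can be compared directly; the hybrid's advantage is that the fixed-policy passes skip the maximization over decisions, which dominates the cost of each conventional iteration.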
Degree Program: Hydrology and Water Resources