Browsing Hydrology & Water Resources Technical Reports by Title

Basin Scale and Runoff Model Complexity
Distributed rainfall-runoff models are gaining widespread acceptance; yet, a fundamental issue that must be addressed by all users of these models is definition of an acceptable level of watershed discretization (geometric model complexity). The level of geometric model complexity is a function of basin and climatic scales as well as the availability of input and verification data. Equilibrium discharge storage is employed to develop a quantitative methodology to define a level of geometric model complexity commensurate with a specified level of model performance. Equilibrium storage ratios are used to define the transition from overland- to channel-dominated flow response. The methodology is tested on four subcatchments in the USDA-ARS Walnut Gulch Experimental Watershed in southeastern Arizona. The catchments cover a range of basin scales of over three orders of magnitude. This enabled a unique assessment of watershed response behavior as a function of basin scale. High-quality, distributed rainfall-runoff data were used to verify the model (KINEROSR). Excellent calibration and verification results provided confidence in subsequent model interpretations regarding watershed response behavior. An average elementary channel support area of roughly 15% of the total basin area is shown to provide a watershed discretization level that maintains model performance for basins ranging in size from 1.5 to 631 hectares. Detailed examination of infiltration, including the role and impacts of incorporating small-scale infiltration variability, in a distributional sense, into KINEROSR over a range of soils and climatic scales, was also addressed. The impacts of infiltration and channel losses on runoff response increase with increasing watershed scale as the relative influence of storms is diminished in a semiarid environment such as Walnut Gulch.
In this semiarid environment, characterized by ephemeral streams, watershed runoff response does not become more linear with increasing watershed scale but appears to become more nonlinear.

BAYES RISK ANALYSIS OF REGIONAL REGRESSION ESTIMATES OF FLOODS
This thesis defines a methodology for the evaluation of the worth of streamflow data using a Bayes risk approach. Using regional streamflow data in a regression analysis, the Bayes risk can be computed by considering the probability of the error in using the regionalized estimates of bridge or culvert design parameters. Cost curves for over- and underestimation of the design parameter can be generated based on the error of the estimate. The Bayes risk can then be computed by integrating the probability of estimation error over the cost curves. The methodology may then be used to analyze the regional data collection effort by considering the worth of data for a record site relative to the other sites contributing to the regression equations. The methodology is illustrated using a set of actual streamflow data from Missouri. The cost curves for over- and underestimation of the streamflow design parameter for bridges and culverts are hypothesized so that the Bayes risk can be computed and the results of the analysis discussed. The results are discussed by demonstrating the small-sample bias that is introduced into the estimate of the design parameter for the construction of bridges and culverts. The conclusions are that the small-sample bias in the estimation of large floods can be substantial and that the Bayes risk methodology can evaluate the relative worth of data when the data are used in regionalization.
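The central computation, integrating the probability of estimation error over the over- and underestimation cost curves, can be sketched numerically. The normal error distribution and linear cost slopes below are illustrative assumptions, not the forms fitted in the thesis.

```python
import math

def normal_pdf(e, sigma):
    return math.exp(-0.5 * (e / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bayes_risk(sigma, cost_over, cost_under, lo=-10.0, hi=10.0, n=4000):
    """Trapezoidal integration of loss(e) * f(e): e > 0 means the design
    parameter was overestimated (oversized structure), e < 0 underestimated
    (undersized structure exposed to flood damage)."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        e = lo + i * h
        loss = cost_over * e if e > 0.0 else cost_under * (-e)
        w = 0.5 if i in (0, n) else 1.0
        total += w * loss * normal_pdf(e, sigma) * h
    return total

# underestimation (flood damage) assumed 3x as costly per unit error
risk = bayes_risk(sigma=1.0, cost_over=1.0, cost_under=3.0)
```

For linear losses and a zero-mean normal error this integral has the closed form (cost_over + cost_under)·σ/√(2π), which the numerical result reproduces, a convenient check before substituting more realistic cost curves.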

BAYESIAN DECISION ANALYSIS OF A STATISTICAL RAINFALL/RUNOFF RELATION
The first purpose of this thesis is to provide a framework for the inclusion of data from a secondary source in Bayesian decision analysis as an aid in decision making under uncertainty. A second purpose is to show that the Bayesian procedures can be implemented on a computer to obtain accurate results at little expense in computing time. The state variables of a bridge design example problem are the unknown parameters of the probability distribution of the primary data. The primary source is the annual peak flow data for the stream being spanned. Information pertinent to the choice of bridge design is contained in rainfall data from gauges on the watershed, but the distribution of these secondary data cannot be directly expressed in terms of the state variables. This study shows that a linear regression equation relating the primary and secondary data provides a means of using secondary data to find the Bayes risk and expected opportunity loss associated with any particular bridge design and a single new rainfall observation. The numerical results for the example problem indicate that the information gained from the rainfall data reduces the Bayes risk and expected opportunity loss and allows for a more economical structural design. Furthermore, careful choice of the numerical methods employed reduces the computation time for these quantities to a level acceptable to any budget.
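One way to see how a regression relation lets secondary (rainfall) data inform the primary state variables is a conjugate normal update in which the regression prediction acts as an indirect, noisier observation. The single-parameter setting and all numbers here are illustrative, not the thesis's bridge design problem.

```python
def posterior_from_rainfall(m0, s0, a, b, r, sigma_eps):
    """Conjugate normal update: the regression prediction a + b*r from a new
    rainfall observation r acts as an indirect observation of the state
    variable, with the regression scatter sigma_eps as its noise."""
    y = a + b * r
    prec = 1.0 / s0 ** 2 + 1.0 / sigma_eps ** 2
    m1 = (m0 / s0 ** 2 + y / sigma_eps ** 2) / prec
    s1 = (1.0 / prec) ** 0.5
    return m1, s1

# prior on mean log peak flow, sharpened by one season's rainfall reading
m1, s1 = posterior_from_rainfall(m0=3.0, s0=0.5, a=1.0, b=0.4, r=6.0,
                                 sigma_eps=1.0)
```

The posterior mean lands between the prior mean and the regression prediction, and the posterior spread is always smaller than the prior's, which is why the rainfall data reduce the Bayes risk.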

CALIBRATION AND VALIDATION OF AQUIFER MODELS
The main aim of this study is to develop a suitable method for the calibration and validation of mathematical models of large and complex aquifer systems. Since the calibration procedure depends on the nature of the model to be calibrated, and since many kinds of models are used for groundwater, the question of model choice is broached first. Various aquifer models are critically reviewed, and a table comparing their capabilities and limitations is set up. The need for a general calibration method for models in which the flow is represented by partial differential equations is identified from this table. The calibration problem is formulated in a general mathematical framework as the inverse problem. Five types of inverse problems that exist in modeling aquifers by partial differential equations are identified. These are to determine (1) parameters, (2) initial conditions, (3) boundary conditions, (4) inputs, and (5) a mixture of the above. Various methods to solve these inverse problems are reviewed, including those from fields other than hydrology. A new direct method to solve the inverse problem (DIMSIP) is then developed. Basically, this method consists of transforming the partial differential equations of flow into algebraic equations by substituting in them the values of the various derivatives of the dependent variable (which may be hydraulic pressure, chemical concentration, or temperature). The parameters are then obtained by formulating the problem in a nonlinear optimization framework. The method of sequential unconstrained minimization is used. Spline functions are used to evaluate the derivatives of the dependent variable. Splines are functions defined by piecewise polynomial arcs in such a way that derivatives up to and including the order one less than the degree of the polynomials used are continuous everywhere.
The natural cubic splines used in this study have the additional property of minimum curvature, which is analogous to a minimum energy surface. These and the derivative-preserving properties of splines make them an excellent tool for approximating the dependent-variable surfaces in groundwater flow problems. Applications of the method to both a test situation and real-world data are given. It is shown that the method evaluates the parameters, boundary conditions, and inputs; that is, it solves inverse problem type (5). General conditions of heterogeneity and anisotropy can be evaluated. However, the method is not applicable to steady flows and has the limitation that flow models in which the parameters are functions of the dependent variable cannot be calibrated. In addition, at least one of the parameters has to be preassigned a value. A discussion of uncertainties in calibration procedures is given. The related problems of model validation and sampling of aquifers are also discussed.
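A stripped-down illustration of the direct approach: generate heads from a known one-dimensional diffusion solution, substitute numerical derivatives into the flow equation, and solve algebraically for the diffusivity. Central differences stand in here for the spline derivative evaluation described above, and the single-parameter setting is only a sketch of DIMSIP's idea, not the method itself.

```python
import math

def estimate_diffusivity(D_true=2.0, k=1.0, dx=0.01, dt=0.001):
    """Direct-method sketch: sample heads from a known 1-D diffusion
    solution, substitute numerical derivatives into h_t = D * h_xx, and
    solve algebraically for D (central differences stand in for splines)."""
    def head(x, t):
        # analytic solution of h_t = D * h_xx for a sinusoidal initial state
        return math.sin(k * x) * math.exp(-D_true * k * k * t)
    estimates = []
    t = 0.5
    for x in (0.5, 1.0, 1.5):
        h_t = (head(x, t + dt) - head(x, t - dt)) / (2.0 * dt)
        h_xx = (head(x + dx, t) - 2.0 * head(x, t) + head(x - dx, t)) / dx ** 2
        estimates.append(h_t / h_xx)
    return sum(estimates) / len(estimates)
```

No iterative forward simulation is needed: once the derivatives are approximated, the flow equation itself becomes the algebraic system that yields the parameter.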

CALIBRATION OF RAINFALL-RUNOFF MODELS USING GRADIENT-BASED ALGORITHMS AND ANALYTIC DERIVATIVES
In the past, derivative-based optimization algorithms have not frequently been used to calibrate conceptual rainfall-runoff (CRR) models, partially due to difficulties associated with obtaining the required derivatives. This research applies a recently developed technique of analytically computing derivatives of a CRR model to a complex, widely used CRR model. The resulting least-squares response surface was found to contain numerous discontinuities in the surface and derivatives. However, the surface and its derivatives were found to be everywhere finite, permitting the use of derivative-based optimization algorithms. Finite-difference numeric derivatives were computed and found to be virtually identical to analytic derivatives. A comparison was made between gradient (Newton-Raphson) and direct (pattern search) optimization algorithms. The pattern search algorithm was found to be more robust. The lower robustness of the Newton-Raphson algorithm was thought to be due to discontinuities and a rough texture of the response surface.
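The robustness of a direct search on a discontinuous response surface can be illustrated with a one-dimensional Hooke-Jeeves-style probe. The step-discontinuous test function below is a stand-in for a CRR least-squares surface, not the model studied in the thesis.

```python
import math

def rough(x):
    # least-squares-like surface with small step discontinuities, mimicking
    # the thresholded internals of a conceptual rainfall-runoff model
    return (x - 3.0) ** 2 + 0.3 * (math.floor(4.0 * x) % 2)

def pattern_search(f, x0, step=1.0, tol=1e-4):
    """Direct (derivative-free) search: probe +/- step, keep improvements,
    halve the step when neither probe helps."""
    x, fx = x0, f(x0)
    while step > tol:
        moved = False
        for cand in (x + step, x - step):
            fc = f(cand)
            if fc < fx:
                x, fx, moved = cand, fc, True
                break
        if not moved:
            step *= 0.5
    return x

x_star = pattern_search(rough, x0=0.0)
```

Because the search compares only function values, the jumps cost it nothing; a Newton-Raphson step, by contrast, can be thrown off wherever a locally fitted derivative straddles one of the discontinuities.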

Characterization of aquifer heterogeneity using transient hydraulic tomography
Hydraulic tomography is a cost-effective technique for characterizing the heterogeneity of hydraulic parameters in the subsurface. During hydraulic tomography surveys, a large number of hydraulic heads (i.e., aquifer responses) are collected from a series of pumping or injection tests in an aquifer. These responses are then used to interpret the spatial distribution of hydraulic parameters of the aquifer using inverse modeling. In this study, we developed an efficient sequential successive linear estimator (SSLE) for interpreting data from transient hydraulic tomography to estimate three-dimensional hydraulic conductivity and specific storage fields of aquifers. We first explored this estimator for transient hydraulic tomography in a hypothetical one-dimensional aquifer. Results show that during a pumping test, transient heads are highly correlated with specific storage at early time but with hydraulic conductivity at late time. Therefore, reliable estimates of both hydraulic conductivity and specific storage must exploit the head data at both early and late times. Our study also shows that the transient heads are highly correlated over time, implying that only infrequent head measurements are needed during the estimation. Applying this sampling strategy to a well-posed problem, we show that our SSLE can produce accurate estimates of both hydraulic conductivity and specific storage fields. The benefit of hydraulic tomography for ill-posed problems is then demonstrated. Finally, to affirm the robustness of our SSLE approach, we apply it to transient hydraulic tomography in a hypothetical two-dimensional aquifer with nonstationary hydraulic properties, as well as a hypothetical three-dimensional heterogeneous aquifer.
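The heart of any successive linear estimator is a conditioning step that maps a head misfit back onto the parameter field through cross-covariances. A single such update for a two-parameter field, with an assumed linearized sensitivity H, might look like the sketch below; the real SSLE iterates this over many observations and re-linearizes, so this is only the kernel of the idea.

```python
def linear_update(prior_mean, prior_cov, H, obs, obs_var):
    """One conditioning step of a (successive) linear estimator: update the
    prior mean/covariance of parameters Y given a single linearized head
    observation h ~ H.Y + noise."""
    n = len(prior_mean)
    pred = sum(H[i] * prior_mean[i] for i in range(n))
    # covariance between the observation and each parameter
    cov_yh = [sum(prior_cov[i][j] * H[j] for j in range(n)) for i in range(n)]
    s = sum(H[i] * cov_yh[i] for i in range(n)) + obs_var  # innovation variance
    gain = [c / s for c in cov_yh]
    post_mean = [prior_mean[i] + gain[i] * (obs - pred) for i in range(n)]
    post_cov = [[prior_cov[i][j] - gain[i] * cov_yh[j] for j in range(n)]
                for i in range(n)]
    return post_mean, post_cov

# head observed near zone 1 only (H = [1, 0]); zones correlated a priori
post_mean, post_cov = linear_update([0.0, 0.0],
                                    [[1.0, 0.3], [0.3, 1.0]],
                                    [1.0, 0.0], 2.0, 1.0)
```

Note how the second, unobserved parameter is still updated, purely through its prior correlation with the first; that propagation of information across the domain is what tomography exploits.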

COLLECTIVE ADJUSTMENT OF THE PARAMETERS OF THE MATHEMATICAL MODEL OF A LARGE AQUIFER
The problem of evaluating the parameters of the mathematical model of an unconfined aquifer is examined with a view toward the development of automated or computer-aided methods. A formulation is presented in which subjective confidence ranges for each of the model parameters are quantified and entered into an objective function as linear penalty functions. Parameters are then adjusted by a procedure which seeks to reduce the model error to acceptable limits. A digital computer model of the Tucson basin aquifer is adapted and used to illustrate the concepts and demonstrate the method.
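The formulation, subjective confidence ranges entering the objective as linear penalty functions, can be sketched directly. The penalty weight and the toy misfit function are placeholders for the Tucson basin model's actual error measure.

```python
def penalized_objective(params, ranges, model_error, weight=10.0):
    """Model error plus a linear penalty for each parameter that strays
    outside its subjective confidence range [lo, hi]."""
    penalty = 0.0
    for p, (lo, hi) in zip(params, ranges):
        if p < lo:
            penalty += weight * (lo - p)
        elif p > hi:
            penalty += weight * (p - hi)
    return model_error(params) + penalty

def toy_error(params):
    # stand-in for the aquifer model's misfit, minimized at p = 5
    return (params[0] - 5.0) ** 2

inside = penalized_objective([2.0], [(0.0, 2.0)], toy_error)    # no penalty
outside = penalized_objective([3.0], [(0.0, 2.0)], toy_error)   # penalized
```

An adjustment procedure minimizing this objective is free to leave a confidence range when the data demand it, but pays linearly for doing so, which keeps the fitted parameters hydrologically plausible.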

COLLECTIVE UTILITY IN THE MANAGEMENT OF NATURAL RESOURCES: A SYSTEMS APPROACH
The main purpose of this report is to develop an economic theory, along the lines of the Bergson-Samuelson social welfare theory, to regulate the utilization of natural resources in the long-term interest of a political-economic group of individuals and firms. The theory, called Collective Utility, qualifies as a "systems approach" because of its inherent flexibility, generality, and comprehensiveness. Collective Utility is a function of individual satisfactions and firm revenues, which are, in general, contingent upon the actions of other individuals and/or firms. Such interactions are called externalities. It is the contention of this report that efficient management of natural resources will follow from efficient control of externalities. A taxation-subsidy structure is suggested as an efficient control, and the complete mathematics of determining and implementing such a structure are provided. Finally, the idea of externalities is integrated within the framework of Collective Utility to form an optimal policy for the utilization of natural resources using the techniques of the calculus of variations.

COLORADO RIVER TRIPS WITHIN THE GRAND CANYON NATIONAL PARK AND MONUMENT: A SOCIOECONOMIC ANALYSIS
The recreational use of the Colorado River within the Grand Canyon National Park and National Monument increased on the order of 60 to 70 per cent during each year of the interval 1967 to 1970. Consequently, the U.S. National Park Service instituted user limits to protect and preserve the area commencing with the 1971 season. This limit was established with limited data on the users of the river or their perceptions of the trip experience. A need existed to collect and analyze this type of data and to suggest possible management alternatives. This study mailed a questionnaire to a random sample of past participants in order to collect basic socioeconomic data. The analysis was based on a 65% response rate and consisted of individual question tabulation and multivariate data cluster analysis. The data show background characteristics of the participants, reasons for taking the trip, reactions to the experience, perceptions of problems associated with the trips, reactions to crowded conditions, and needs for regulatory policy concerning user intensities.

CONFUSION WHERE GROUND AND SURFACE WATERS MEET: GILA RIVER GENERAL ADJUDICATION, ARIZONA AND THE SEARCH FOR SUBFLOW
Arizona is presently in the midst of a general adjudication for the Gila River system, the watershed which comprises the southern two-thirds of the state. The purpose of the adjudication is to prioritize all water claims in the river system: both state-established and federally reserved rights. Arizona adheres to a bifurcated (or divided) system of water law which recognizes only a component of ground water, called subflow, as appropriable. Wells which pump nonappropriable water, called tributary flow, are not to be included in the adjudication. The problem is that federal laws do not recognize this artificial bifurcation. The challenge lies in identifying a subflow zone which satisfies the hydrologic fiction of existing state precedents and the hydrologic reality of federal statutes. At the core of the problem lies the fate of Arizona's perennial stream water and the fulfillment of federally reserved tribal water rights. Thus, larger questions loom: can Arizona law reconcile its glutinous past with a water-scarce future, will the adjudication ever reach a finality, and even if it does, will it be a finality that all sides can live with?

A CONTINUOUS REVIEW INVENTORY MODEL FOR IRRIGATION WATER APPLICATION
This thesis is concerned with the problem of determining an optimal irrigation policy, that is, an optimal quantity and frequency of irrigation water application. The purpose is to present a solution to this problem using a continuous review model of an inventory system. Initially, the functions of the plant-water-soil system are discussed. This is followed by a review of several existing methods for maximizing crop yield or profit by determining an optimal irrigation policy. Next, the inventory problem is briefly examined. An analogy is drawn between the farmer's problem of determining an optimal irrigation policy and the businessman's problem of determining an optimal ordering policy. Subsequently, a continuous review model of the irrigation system is developed and an example of its use is given.
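The farmer/businessman analogy can be made concrete with a reorder-point simulation: soil moisture is the stock, crop water use is the demand, and irrigation is the replenishment order. The demand distribution, thresholds, and capacities below are invented for illustration, not taken from the thesis.

```python
import random

def simulate_irrigation(s, Q, days=60, capacity=10.0, seed=1):
    """Continuous-review inventory analogue of irrigation: an 'order' of Q
    units of water is placed whenever soil moisture falls below the
    reorder point s."""
    random.seed(seed)
    moisture = 6.0
    applications, stress_days = 0, 0
    for _ in range(days):
        moisture -= random.uniform(0.2, 0.6)      # daily crop water use
        if moisture < s:                          # continuous-review trigger
            moisture = min(moisture + Q, capacity)
            applications += 1
        if moisture < 1.0:                        # crop stress threshold
            stress_days += 1
    return applications, stress_days

applications, stress_days = simulate_irrigation(s=3.0, Q=4.0)
```

Sweeping s and Q in such a simulation traces out the policy trade-off between application frequency and crop stress, which is exactly the (s, Q) question the inventory model answers analytically.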

A COST-EFFECTIVENESS STUDY AND ANALYSIS OF MUNICIPAL REFUSE DISPOSAL SYSTEMS
The comparison of alternative systems for disposing efficiently and effectively of the four to five pounds of solid waste generated per person per day in United States urban communities is undertaken using Kazanowski's standardized cost-effectiveness methodology. The economic criteria for studying this problem are often limited to cost or marketable measures; in contrast, use of a cost-effectiveness approach allows the inclusion of nonquantifiable measures of effectiveness such as public acceptance, politics, health risks, environmental considerations, and soil benefits. Data from a case study in Tucson, Arizona, are used to illustrate the problem.

COUPLING STOCHASTIC AND DETERMINISTIC HYDROLOGIC MODELS FOR DECISION-MAKING
Many planning decisions related to the land phase of the hydrologic cycle involve uncertainty due to stochasticity of rainfall inputs and uncertainty in the state and knowledge of hydrologic processes. Consideration of this uncertainty in planning requires quantification in the form of probability distributions. The needed probability distributions must, in many cases, be obtained by transforming distributions of rainfall input and hydrologic state through deterministic models of hydrologic processes. Probability generating functions are used to derive a recursive technique that provides the necessary probability transformation for situations where the hydrologic output of interest is the cumulative effect of a random number of stochastic inputs. The derived recursive technique is observed to be quite accurate from a comparison of probability distributions obtained independently by the recursive technique and by an exact analytic method for a simple problem that can be solved with the analytic method. The assumption of Poisson occurrence of rainfall events, which is inherent in the derivation of the recursive technique, is examined and found reasonable for practical application. Application of the derived technique is demonstrated with two important hydrology-related problems. It is first demonstrated for computing probability distributions of annual direct runoff from a watershed, using the USDA Soil Conservation Service (SCS) direct runoff model and stochastic models for rainfall event depth and watershed state. The technique is also demonstrated for obtaining probability distributions of annual sediment yield. For this demonstration, the deterministic transform model consists of a parametric event-based sediment yield model and the SCS models for direct runoff volume and peak flow rate. The stochastic rainfall model consists of a marginal Weibull distribution for rainfall event duration and a conditional lognormal distribution for rainfall event depth, given duration. The stochastic state model is the same as used for the direct runoff application. Probability distributions obtained with the recursive technique for both the direct runoff and sediment yield demonstration examples appear reasonable when compared to available data. It is, therefore, concluded that the recursive technique, derived from probability generating functions, is a feasible transform method that can be useful for coupling stochastic models of rainfall input and state to deterministic models of hydrologic processes to obtain probability distributions of outputs where these outputs are cumulative effects of random numbers of stochastic inputs.
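For a Poisson number of events with discrete per-event contributions, the recursion that falls out of the probability generating function is the standard compound-Poisson (Panjer-type) recursion. The sketch below uses a made-up two-point depth distribution and is analogous to, not identical with, the thesis's transform technique.

```python
import math

def compound_poisson_pmf(lam, f, n_max):
    """P(S = n) for S = X1 + ... + XN with N ~ Poisson(lam) and Xi iid
    with pmf f on {0, 1, 2, ...}; the recursion follows from the
    probability generating function of the random sum."""
    g = [0.0] * (n_max + 1)
    g[0] = math.exp(-lam * (1.0 - f[0]))
    for n in range(1, n_max + 1):
        g[n] = (lam / n) * sum(k * f[k] * g[n - k]
                               for k in range(1, min(n, len(f) - 1) + 1))
    return g

# annual runoff total from Poisson(2) events, each contributing 1 or 2 units
pmf = compound_poisson_pmf(lam=2.0, f=[0.0, 0.5, 0.5], n_max=60)
```

Each probability is obtained from earlier ones in a single pass, which is what makes the recursive transform cheap compared with summing the full Poisson mixture of convolutions.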

Deciding to Recharge
Public water policy decision making tends to be too complex and dynamic to be described fully by traditional, rational models. Information intended to improve decisions often is rendered ineffective by a failure to understand the process. An alternative, holistic description of how such decisions actually are made is presented here and illustrated with a case study. The role of information in the process is highlighted. Development of a Regional Recharge Plan for Tucson, Arizona, is analyzed as the case study. The description of how decisions are made is based on an image of public water policy decision making as 1) a structured, nested network of individuals and groups with connections to their environment through their senses, mediated by their knowledge; and 2) a nonlinear process in which decisions feed back to affect the preferences and intentions of the people involved, the structure of their interactions, and the environment in which they operate. The analytical components of this image are 1) the decision makers, 2) the relevant features of their environment, 3) the structure of their interactions, and 4) the products or outputs of their deliberations. Policy decisions analyzed by these components, in contrast to the traditional analysis, disclose a new set of relationships and suggest a new view of the uses of information. In the context of information use, perhaps the most important output of the decision process is a shared interpretation of the policy issue. This interpretation sets the boundaries of the issue and the nature of issue-relevant information. Participants are unlikely to attend to information incompatible with the shared interpretation. Information is effective when used to shape the issue interpretation, fill specific gaps identified as issue-relevant during the process, rationalize choices, and reshape the issue interpretation as the issue environment evolves.

Decision Making Under Uncertainty in Systems Hydrology
The design of engineering projects involves a certain amount of uncertainty. How should design decisions be made in the face of this uncertainty? What is the most efficient way of handling the data? Decision theory can provide useful answers to these questions. The literature review shows that decision theory is a fairly well developed decision method, with almost no application in hydrology. The steps of decision-theoretic analysis are given. They are augmented by the concept of expected expected opportunity loss, which is developed as a means of measuring the expected value of additional data before they are received. The method is applied to the design of bridge piers and flood levees for Rillito Creek, Pima County, Arizona. Uncertainty in both the mean and the variance of the logarithms of the peak flows of Rillito Creek is taken into account. Also shown are decision-theoretic methods for: 1) handling secondary data, such as those obtained from a regression relation, 2) evaluating the effect of the use of non-sufficient statistics, 3) considering alternate models, and 4) regionalizing data. It is concluded that decision theory provides a rational structure for making design decisions and for the associated data collection and handling problems.
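The opportunity-loss bookkeeping behind these methods can be shown with a two-design, two-state toy problem; the loss figures are invented for illustration, not Rillito Creek numbers.

```python
def expected_losses(loss, probs):
    """loss[d][s]: loss of design d if state s occurs."""
    return [sum(row[s] * probs[s] for s in range(len(probs))) for row in loss]

def expected_opportunity_loss(loss, probs):
    """Pick the Bayes design (minimum expected loss) and return its expected
    regret relative to the design one would choose knowing the true state."""
    exp_loss = expected_losses(loss, probs)
    best = min(range(len(loss)), key=lambda d: exp_loss[d])
    eol = sum(probs[s] * (loss[best][s] - min(row[s] for row in loss))
              for s in range(len(probs)))
    return best, eol

# designs: 0 = low pier, 1 = high pier; states: 0 = ordinary year, 1 = big flood
loss = [[10.0, 100.0],
        [40.0, 50.0]]
best, eol = expected_opportunity_loss(loss, probs=[0.7, 0.3])
```

Averaging this expected opportunity loss over the predictive distribution of a prospective sample is what yields an "expected expected opportunity loss", a price tag on additional data before they are collected.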

DESIGN OF WATER RESOURCES SYSTEMS IN DEVELOPING COUNTRIES: THE LOWER MEKONG BASIN
This study focuses on the design of water resources systems in developing nations, with particular reference to the development of water resources in the Lower Mekong Basin (Khmer Republic, Laos, Thailand, and Republic of South Viet Nam). The determination of the "best" system in terms of social goals reflecting the economic and social environment of the Mekong countries is the main issue of this dissertation. The imperfection of the usual technique for planning water resources systems, namely cost-benefit analysis, leads to the use of the standardized cost-effectiveness methodology. To illustrate how the design is accomplished, two distinctly different structural alternatives of possible development in the Lower Mekong Basin are defined. The design process starts from statements of the goals or objectives of water resources development, which are then mapped onto specification sets in which social needs are represented. Next, the capabilities of alternative systems are determined through simulation, in which three 50-year sequences of synthetic streamflow are generated by a first-order autoregressive scheme. The two alternatives are then compared using both quantitative and qualitative criteria. To illustrate how a decision in selecting an alternative system could be reached, ranking of criteria by order of preference is demonstrated. With the choice of either a fixed-cost or a fixed-effectiveness approach, the decision to select the best alternative system can be made. At this point, the use of a weighting technique, which is a common fallacy of systems analysis, is automatically eliminated. The study emphasizes that a systematic design procedure for water resources systems is provided by the standardized cost-effectiveness approach, which possesses several advantages.
The approach will suggest and help identify the system closest to meeting the desired economic and social goals of the developing countries in the Lower Mekong Basin. In this connection, the approach will help governments in the preparation of programs and the budgeting of capital for further investigations and investments. It is believed that the approach will eliminate unnecessary expenses in projects that are planned on an individual basis or by methods used at present. Further, the approach provides an appropriate mechanism for generating essential information in the decision process. Both quantifiable and nonquantifiable criteria are fully considered. The choice of a fixed-cost or fixed-effectiveness approach will determine the trade-off between these criteria. The study recognizes that research to determine appropriate hydrologic models for monthly streamflow generation for tributary projects in the Basin is necessary. This leads to another important area of research: finding the appropriate number of monthly sequences of streamflow to be generated in relation to the number of states and decision variables. Research on the design of computer experiments is necessary to improve simulation as a tool for estimating the quantitative effects of a given project.
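The synthetic-streamflow step, a first-order autoregressive (lag-one Markov) scheme, is compact enough to sketch. Normal flows and the parameter values below are assumptions for illustration; a Mekong study would use fitted, possibly transformed, monthly marginals.

```python
import random

def synthetic_flows(mean, sd, rho, n, seed=42):
    """First-order autoregressive (lag-one Markov) streamflow generator:
    q[t] = mean + rho*(q[t-1] - mean) + e,  e ~ N(0, sd*sqrt(1 - rho^2)),
    which preserves the marginal mean and standard deviation."""
    random.seed(seed)
    q = [mean]
    for _ in range(n - 1):
        e = random.gauss(0.0, sd * (1.0 - rho * rho) ** 0.5)
        q.append(mean + rho * (q[-1] - mean) + e)
    return q

flows = synthetic_flows(mean=100.0, sd=20.0, rho=0.5, n=600)
```

Generating several such sequences and simulating each alternative against all of them is what turns a single historical record into a probabilistic assessment of system capability.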

DEVELOPMENT AND VALIDATION OF A NEW MAXIMUM LIKELIHOOD CRITERION SUITABLE FOR DATA COLLECTED AT UNEQUAL TIME INTERVALS
A new maximum likelihood (MLE) criterion suitable for data which are recorded at unequal time intervals and contain autocorrelated errors is developed. Validation of the new MLE criterion has been carried out both on a simple two-parameter reservoir model using synthetic data and on a more complicated hillslope model using real data from the Pukeiti Catchment in New Zealand. Comparison between the new MLE criterion and the simple least squares (SLS) criterion reveals the superiority of the former over the latter. Comparison between the new MLE and the MLE for the autocorrelated case proposed by Sorooshian in 1978 shows that both criteria yield results with no practical difference when equal-time-interval data are used. However, the new MLE can work on variable-time-interval data, which provide more information than equal-time-interval data, and therefore produces better visual results in hydrologic simulations.
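The essential device is that an AR(1) error's correlation over a gap of length dt is rho**dt, so irregular spacing changes only each step's conditional mean and variance. The sketch below writes that Gaussian log-likelihood; it follows the same autocorrelated-error reasoning but is not the thesis's exact criterion.

```python
import math
import random

def log_likelihood(times, resid, sigma, rho):
    """Gaussian log-likelihood of model residuals with AR(1) errors at
    irregular sampling times: corr(e_i, e_{i-1}) = rho**dt, so only the
    conditional mean and variance of each step depend on the gap dt."""
    ll = -0.5 * math.log(2.0 * math.pi * sigma ** 2) \
         - resid[0] ** 2 / (2.0 * sigma ** 2)
    for i in range(1, len(resid)):
        dt = times[i] - times[i - 1]
        r = rho ** dt
        var = sigma ** 2 * (1.0 - r * r)
        ll += -0.5 * math.log(2.0 * math.pi * var) \
              - (resid[i] - r * resid[i - 1]) ** 2 / (2.0 * var)
    return ll

# irregularly sampled AR(1) residuals with true rho = 0.8
random.seed(0)
times, resid = [0.0], [random.gauss(0.0, 1.0)]
for _ in range(400):
    dt = random.choice([0.5, 1.0, 3.0])
    times.append(times[-1] + dt)
    r = 0.8 ** dt
    resid.append(r * resid[-1] + random.gauss(0.0, math.sqrt(1.0 - r * r)))
```

Maximizing this likelihood over the model parameters, sigma, and rho uses the irregular record as-is, with no need to interpolate onto an even grid.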

A Distributed Surface Temperature and Energy Balance Model of a Semi-Arid Watershed
A simple model of surface and subsurface soil temperature was developed at the watershed scale (~100 km²) in a semi-arid rangeland environment. The model consisted of a linear combination of air temperature and net radiation and assumed: 1) topography controls the spatial distribution of net radiation, 2) near-surface air temperature and incoming solar radiation are relatively homogeneous at the watershed scale and are available from ground stations, and 3) soil moisture dominates transient soil thermal property variability. Multiplicative constants were defined to account for clear-sky diffuse radiation, soil thermal inertia, an initially fixed ratio between soil heat flux and net radiation, and exponential attenuation of solar radiation through a partial canopy. The surface temperature can optionally be adjusted for temperature and emissivity differences between mixed bare soil and vegetation canopies. Model development stressed physical simplicity and commonly available spatial and temporal data sets. Slowly varying surface characteristics, such as albedo, vegetation density, and topography, were derived from a series of Landsat TM images and a 7.5-minute USGS digital elevation model at a spatial resolution of 30 m. Diurnally variable atmospheric parameters were derived from a pair of ground meteorological stations using 30-60 min averages. One site was used to drive the model; the other served as a control to estimate model error. Data collected as part of the Monsoon '90 and WG '92 field experiments over the ARS Walnut Gulch Experimental Watershed in SE Arizona were used to validate and test the model. Point, transect, and spatially distributed values of modeled surface temperature were compared with synchronous ground, aircraft, and satellite thermal measurements.
There was little difference between ground and aircraft measurements of surface reflectance and temperature, which makes aircraft transects the preferred method to "ground truth" satellite observations. Mid-morning modeled surface temperatures were within 2 °C of observed values at all but satellite scales, where atmospheric water vapor corrections complicate the determination of accurate temperatures. The utility of satellite thermal measurements and models to study various ground phenomena (e.g., soil thermal inertia and surface energy balance) was investigated. Soil moisture anomalies were detectable, but were more likely associated with average near-surface soil moisture levels than individual storm footprints.
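The model's structure, a linear combination of air temperature and topographically adjusted net radiation, reduces to a one-liner. The coefficient below is an invented placeholder, not the calibrated Walnut Gulch value, and real use would fold in the diffuse-radiation, thermal-inertia, and canopy factors described above.

```python
def surface_temperature(t_air, net_rad, topo_factor=1.0, c=0.018):
    """Sketch of the linear model: surface temperature as air temperature
    plus a term proportional to topographically adjusted net radiation.
    The coefficient c (deg C per W/m^2) is illustrative, not fitted."""
    return t_air + c * topo_factor * net_rad

flat = surface_temperature(25.0, 500.0)                    # level ground
south = surface_temperature(25.0, 500.0, topo_factor=1.2)  # sun-facing slope
```

Because air temperature and solar forcing are assumed spatially uniform, all of the modeled spatial pattern enters through the per-pixel topographic factor, which is what makes a 30 m DEM sufficient to distribute the model.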

DPRCI/GAUSS: A Program to Calculate Reservoir Yield Curves Using a Dynamic Programming Reservoir Operation Algorithm
This report presents a computer program which calculates reservoir yield curves for a reservoir operation policy based on optimization accomplished using a dynamic programming algorithm. After discussion of the dynamic programming and Gauss elimination algorithms, the input requirements, execution procedures, source code presentation, sample output, and references are presented. The programs run in a relatively short time on an 80286 personal computer. They are written in FORTRAN 77 and compiled using the Lahey F77L compiler.
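A minimal version of the optimization such a program performs is a backward dynamic program over discrete storage states, here maximizing the firm (minimum) release, the quantity a yield curve plots against capacity. Integer states and the tiny horizon are simplifications of the report's FORTRAN implementation, not a reproduction of it.

```python
def firm_yield(inflows, capacity, s0):
    """Backward dynamic program over integer storage states that maximizes
    the firm (minimum) per-period release from initial storage s0."""
    INF = float("inf")
    value = [INF] * (capacity + 1)            # value after the final period
    for q in reversed(inflows):
        new = [0.0] * (capacity + 1)
        for s in range(capacity + 1):
            best = 0.0
            for r in range(s + q + 1):        # candidate release this period
                s_next = min(s + q - r, capacity)   # excess spills
                best = max(best, min(r, value[s_next]))
            new[s] = best
        value = new
    return value[s0]
```

With a steady inflow of 3 the firm yield is 3, while for inflows [4, 0] carryover storage supports a firm yield of 2; repeating the calculation over a range of capacities traces out the yield curve.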