• Aquifer Modeling by Numerical Methods Applied to an Arizona Groundwater Basin

      Fogg, Graham E.; Simpson, Eugene S.; Neuman, Shlomo P. (Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1979-06)
      FLUMP, a recently developed mixed explicit-implicit finite-element program, was calibrated against a data base obtained from a portion of the Tucson Basin aquifer, Arizona, in its first application to a real-world problem. Two previous models of the same region (an electric analog and a finite-difference model) were calibrated with prescribed-flux boundary conditions along stream courses and mountain fronts. These fluxes are not directly measured, and their estimates are subject to large uncertainties. In contrast, the boundary conditions used in the calibration of FLUMP were prescribed hydraulic heads obtained from direct measurement. At prescribed-head boundaries FLUMP computed time-varying fluxes representing subsurface lateral flow and recharge along streams. FLUMP correctly calculated fluctuations in recharge along the Santa Cruz River due to fluctuations in storm runoff and sewage effluent release rates. FLUMP also provided valuable insight into distributions of recharge, discharge, and subsurface flow in the study area. Properties of FLUMP were compared with those of two other programs in current use: ISOQUAD, a finite-element program developed by Pinder and Frind (1972), and a finite-difference program developed by the U.S. Geological Survey (Trescott et al., 1976). It appears that FLUMP can handle a larger class of problems than the other two programs, including those in which the boundary conditions and aquifer parameters vary arbitrarily with time and/or head. FLUMP also offers the ability to solve explicitly when accuracy requires small time steps, the ability to solve explicitly in certain parts of the flow region while solving implicitly in others, flexibility in mesh design and numbering of nodes, computation of internal as well as external fluxes, and global as well as local mass-balance checks at each time step.
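A minimal sketch of the mixed explicit-implicit idea on a hypothetical 1-D grid: part of the region is advanced with cheap explicit (forward Euler) updates while the rest is solved implicitly, with the two subregions coupled at their interface. The grid size, node split, and diffusion number below are illustrative assumptions, not FLUMP's actual scheme.

```python
def thomas(a, b, c, d):
    # Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal.
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step_mixed(h, r, split):
    # One time step of 1-D diffusion h_t = D h_xx with r = D*dt/dx**2.
    # h[0] and h[-1] are prescribed heads; nodes 1..split-1 are advanced
    # explicitly (forward Euler, stable for r <= 0.5), nodes split..n-2
    # implicitly (backward Euler), coupled via the updated interface node.
    n = len(h)
    new = h[:]
    for i in range(1, split):                      # explicit subregion
        new[i] = h[i] + r * (h[i - 1] - 2 * h[i] + h[i + 1])
    m = n - 1 - split                              # implicit unknowns
    if m > 0:
        a, b, c = [-r] * m, [1 + 2 * r] * m, [-r] * m
        d = [h[i] for i in range(split, n - 1)]
        d[0] += r * new[split - 1]                 # explicit/implicit interface
        d[-1] += r * h[n - 1]                      # prescribed-head boundary
        new[split:n - 1] = thomas(a, b, c, d)
    return new

# Prescribed heads of 1.0 and 0.0 at the ends; the transient solution
# should relax to a linear head profile across the 11 nodes.
h = [1.0] + [0.0] * 10
for _ in range(500):
    h = step_mixed(h, 0.4, split=5)
```

With fixed heads at both ends, the steady-state profile is linear, which gives a quick correctness check on the scheme.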

      Sagar, Budhi, 1943-; Department of Hydrology & Water Resources, The University of Arizona (Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1973-06)
      The main aim of this study is to develop a suitable method for the calibration and validation of mathematical models of large and complex aquifer systems. Since the calibration procedure depends on the nature of the model to be calibrated, and since many kinds of models are used for groundwater, the question of model choice is broached first. Various aquifer models are critically reviewed, and a table comparing their capabilities and limitations is set up. The need for a general calibration method for models in which the flow is represented by partial differential equations is identified from this table. The calibration problem is formulated in a general mathematical framework as the inverse problem. Five types of inverse problems that arise in modeling aquifers by partial differential equations are identified. These are to determine (1) parameters, (2) initial conditions, (3) boundary conditions, (4) inputs, and (5) a mixture of the above. Various methods to solve these inverse problems are reviewed, including those from fields other than hydrology. A new direct method to solve the inverse problem (DIMSIP) is then developed. Basically, this method consists of transforming the partial differential equations of flow into algebraic equations by substituting into them the values of the various derivatives of the dependent variable (which may be hydraulic pressure, chemical concentration, or temperature). The parameters are then obtained by formulating the problem in a nonlinear optimization framework. The method of sequential unconstrained minimization is used. Spline functions are used to evaluate the derivatives of the dependent variable. Splines are functions defined by piecewise polynomial arcs in such a way that derivatives up to and including the order one less than the degree of the polynomials used are continuous everywhere.
The natural cubic splines used in this study have the additional property of minimum curvature, which is analogous to a minimum-energy surface. These and the derivative-preserving properties of splines make them an excellent tool for approximating the dependent-variable surfaces in groundwater flow problems. Applications of the method to both a test situation and real-world data are given. It is shown that the method evaluates the parameters, boundary conditions, and inputs; that is, it solves inverse problem type V. General conditions of heterogeneity and anisotropy can be evaluated. However, the method is not applicable to steady flows, and it has the limitation that flow models in which the parameters are functions of the dependent variable cannot be calibrated. In addition, at least one of the parameters has to be preassigned a value. A discussion of uncertainties in calibration procedures is given. The related problems of model validation and sampling of aquifers are also discussed.
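As a toy illustration of the spline idea (a sketch of the general approach, not the DIMSIP algorithm itself), a constant transmissivity T in the hypothetical steady 1-D flow equation T h'' + q = 0 can be recovered by fitting a natural cubic spline to observed heads and substituting its second derivatives into the equation. All numbers below are invented for the example.

```python
def natural_spline_m(y, dx):
    # Second derivatives M_i of the natural cubic spline through the
    # equally spaced data y (natural end conditions: M_0 = M_n = 0).
    # Interior M satisfy M[i-1] + 4*M[i] + M[i+1] = 6*(2nd difference)/dx^2.
    n = len(y) - 1
    m = n - 1
    rhs = [6.0 * (y[i - 1] - 2 * y[i] + y[i + 1]) / dx ** 2 for i in range(1, n)]
    # Thomas algorithm for the tridiagonal system with sub/super diagonals = 1.
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = 0.25, rhs[0] / 4.0
    for i in range(1, m):
        denom = 4.0 - cp[i - 1]
        cp[i] = 1.0 / denom
        dp[i] = (rhs[i] - dp[i - 1]) / denom
    sol = [0.0] * m
    sol[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        sol[i] = dp[i] - cp[i] * sol[i + 1]
    return [0.0] + sol + [0.0]

# Hypothetical truth: T = 50, recharge q = 2; heads generated from
# T h'' + q = 0, i.e. h(x) = 100 + 0.5 x - q x^2 / (2 T).
T_true, q, dx = 50.0, 2.0, 10.0
heads = [100.0 + 0.5 * (i * dx) - q * (i * dx) ** 2 / (2.0 * T_true)
         for i in range(13)]
M = natural_spline_m(heads, dx)
mid = M[4:9]                  # interior nodes, away from the natural-end error
T_est = -q / (sum(mid) / len(mid))
```

Because the natural-end conditions force the spline's curvature to zero at the boundaries, second derivatives near the ends are biased; the substitution is therefore done at interior nodes only.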

      Hendrickson, Jene Diane, 1960-; Sorooshian, Soroosh; Department of Hydrology & Water Resources, The University of Arizona (Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1987-05)
      In the past, derivative-based optimization algorithms have not frequently been used to calibrate conceptual rainfall-runoff (CRR) models, partially due to difficulties associated with obtaining the required derivatives. This research applies a recently developed technique for analytically computing derivatives of a CRR model to a complex, widely used CRR model. The resulting least-squares response surface was found to contain numerous discontinuities in the surface and its derivatives. However, the surface and its derivatives were found to be everywhere finite, permitting the use of derivative-based optimization algorithms. Finite-difference numerical derivatives were computed and found to be virtually identical to the analytic derivatives. A comparison was made between a gradient (Newton-Raphson) and a direct (pattern search) optimization algorithm. The pattern search algorithm was found to be more robust. The lower robustness of the Newton-Raphson algorithm was thought to be due to the discontinuities and rough texture of the response surface.
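A minimal compass/pattern search of the kind referred to above (a generic sketch, not the specific algorithm tested in the study): it probes each coordinate direction, accepts improving moves, and shrinks the step when none improve, so it needs no derivatives and tolerates a discontinuous objective. The rough test function is hypothetical.

```python
import math

def pattern_search(f, x0, step=1.0, tol=1e-4, shrink=0.5, max_iter=10000):
    # Probe +/- step along each coordinate; accept any improving move,
    # halve the step when no probe improves.
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = x[:]
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
    return x, fx

def rough(p):
    # Smooth bowl centred at (3, -1) plus a small checkerboard jump,
    # mimicking a discontinuous, rough-textured response surface.
    x, y = p
    return ((x - 3.0) ** 2 + (y + 1.0) ** 2
            + 0.01 * ((math.floor(4 * x) + math.floor(4 * y)) % 2))

best, fbest = pattern_search(rough, [0.0, 0.0])
```

A Newton-type method would need the (here nonexistent) derivatives across the jumps; the pattern search only compares function values, which is why direct methods tend to be more robust on such surfaces.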

      Lovell, Robert E.; Department of Hydrology & Water Resources, The University of Arizona (Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1971-06)
      The problem of evaluating the parameters of the mathematical model of an unconfined aquifer is examined with a view toward development of automated or computer-aided methods. A formulation is presented in which subjective confidence ranges for each of the model parameters are quantified and entered into an objective function as linear penalty functions. Parameters are then adjusted by a procedure which seeks to reduce the model error to acceptable limits. A digital computer model of the Tucson basin aquifer is adapted and used to illustrate the concepts and demonstrate the method.
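The linear-penalty construction can be sketched as follows (the weight, ranges, and misfit values are hypothetical, not those of the study): inside its confidence range a parameter contributes nothing, and outside it the objective grows linearly with the violation.

```python
def penalized_objective(misfit, params, ranges, weight=10.0):
    # misfit: model error term (e.g. sum of squared head residuals).
    # ranges: subjective confidence range (lo, hi) for each parameter;
    # a linear penalty is added for any parameter outside its range.
    penalty = 0.0
    for p, (lo, hi) in zip(params, ranges):
        if p < lo:
            penalty += weight * (lo - p)
        elif p > hi:
            penalty += weight * (p - hi)
    return misfit + penalty
```

For example, with a hypothetical confidence range (0.0, 4.0) and weight 2.0, a parameter value of 5.0 adds a penalty of 2.0 to the misfit, while any in-range value leaves it unchanged.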
    • Improving the Reliability of Compartmental Models: Case of Conceptual Hydrologic Rainfall-Runoff Models

      Sorooshian, Soroosh; Gupta, Vijai Kumar; Department of Hydrology & Water Resources, The University of Arizona (Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1986-08)

      Gates, Joseph Spencer; Department of Hydrology & Water Resources, The University of Arizona (Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1972-06)
      Two digital-computer models of the ground-water reservoir of the Tucson basin, in south-central Arizona, were constructed to study errors in digital models and to evaluate the worth of additional basic data to models. The two models differ primarily in degree of detail: the large-scale model consists of 1,890 nodes at a 1/2-mile spacing, and the small-scale model consists of 509 nodes at a 1-mile spacing. Potential errors in the Tucson basin models were classified as errors associated with computation, errors associated with mathematical assumptions, and errors in basic data: the model parameters of coefficient of storage and transmissivity, initial water levels, and discharge and recharge. The study focused on evaluating the worth of additional basic data to the small-scale model. A basic form of statistical decision theory was used to compute the expected error in predicted water levels and the expected worth of sample data (expected reduction in error) over the whole model associated with uncertainty in a model variable at one given node. Discrete frequency distributions with largely subjectively determined parameters were used to characterize tested variables. Ninety-one variables at sixty-one different locations in the model were tested, using six separate error criteria. Of the tested variables, 67 were chosen because their expected errors were likely to be large and, for the purpose of comparison, 24 were chosen because their expected errors were not likely to be particularly large. Of the uncertain variables, discharge/recharge and transmissivity have the largest expected errors (averaging 155 and 115 feet, respectively, per 509 nodes for the criterion of absolute value of error) and expected sample worths (averaging 29 and 14 feet, respectively, per 509 nodes). In contrast, initial water level and storage coefficient have lesser values.
Of the more certain variables, transmissivity and initial water level generally have the largest expected errors (a maximum of 73 feet per 509 nodes) and expected sample worths (a maximum of 12 feet per 509 nodes), whereas storage coefficient and discharge/recharge have smaller values. These results likely are not typical of those from many ground-water basins and may apply only to the Tucson basin. The largest expected errors are associated with nodes at which values of discharge/recharge are large or at which prior estimates of transmissivity are very uncertain. Large expected sample worths are associated with variables which have large expected errors or which could be sampled with relatively little uncertainty. Results are similar for all six of the error criteria used. Tests were made of the sensitivity of the method to such simplifications and assumptions as the type of distribution function assumed for a variable, the values of the estimated standard deviations of the distributions, and the number and spacing of the elements of each distribution. The results are sensitive to all of these assumptions and therefore likely are correct only in order of magnitude. However, the ranking of the types of variables in terms of magnitude of expected error and expected sample worth is not sensitive to the assumptions, and thus the general conclusions on the relative effects of errors in different variables likely are valid. Limited studies of error propagation indicated that errors in predicted water levels associated with extreme erroneous values of a variable commonly are less than 4 feet per node at a distance of 1 mile from the tested node. This suggests that in many cases, prediction errors associated with errors in basic data are not a major problem in digital modeling.
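The decision-theoretic bookkeeping can be sketched with a discrete frequency distribution (all numbers below are hypothetical; in the study the error model is the calibrated ground-water model itself, and the simplifying assumption here is that a sample measures the variable exactly):

```python
def expected_error(values, probs, used, err):
    # Expected prediction error when the model is run with the prior
    # estimate `used` while the variable's true value follows the
    # discrete distribution (values, probs).
    return sum(p * err(v, used) for v, p in zip(values, probs))

def expected_sample_worth(values, probs, used, err):
    # If a measurement reveals the true value exactly, the post-sample
    # error is zero, so the expected worth of sampling (expected
    # reduction in error) equals the prior expected error.
    return expected_error(values, probs, used, err)

# Hypothetical sensitivity: 0.8 ft of predicted-head error per unit of
# transmissivity error at the tested node.
err = lambda true, used: 0.8 * abs(true - used)
values, probs = [50.0, 100.0, 200.0], [0.25, 0.5, 0.25]
worth = expected_sample_worth(values, probs, used=100.0, err=err)
```

Variables with wide prior distributions or strong influence on predicted heads dominate both the expected error and the expected sample worth, which matches the study's finding that discharge/recharge and uncertain transmissivities rank highest.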