    Filter by Category

    Discipline: Graduate College (194); Hydrology and Water Resources (194)
    Authors: Neuman, Shlomo P. (28); Ince, Simon (26); Evans, Daniel D. (25); Sorooshian, Soroosh (25); Simpson, Eugene S. (23); Davis, Donald R. (20); Harshbarger, John W. (19); Warrick, Arthur W. (17); Davis, Stanley N. (14); Maddock, Thomas (14)
    Types: Dissertation-Reproduction (electronic) (194); text (194)

    Now showing items 1-10 of 194


    Worth of data used in digital-computer models of ground-water basins.

    Gates, Joseph Spencer (The University of Arizona., 1972)
    Two digital-computer models of the ground-water reservoir of the Tucson basin, in south-central Arizona, were constructed to study errors in digital models and to evaluate the worth of additional basic data to models. The two models differ primarily in degree of detail -- the large-scale model consists of 1,890 nodes, at a 1/2-mile spacing; and the small-scale model consists of 509 nodes, at a 1-mile spacing. Potential errors in the Tucson basin models were classified as errors associated with computation, errors associated with mathematical assumptions, and errors in basic data: the model parameters of coefficient of storage and transmissivity, initial water levels, and discharge and recharge. The study focused on evaluating the worth of additional basic data to the small-scale model. A basic form of statistical decision theory was used to compute expected error in predicted water levels and expected worth of sample data (expected reduction in error) over the whole model associated with uncertainty in a model variable at one given node. Discrete frequency distributions with largely subjectively-determined parameters were used to characterize tested variables. Ninety-one variables at sixty-one different locations in the model were tested, using six separate error criteria. Of the tested variables, 67 were chosen because their expected errors were likely to be large and, for the purpose of comparison, 24 were chosen because their expected errors were not likely to be particularly large. Of the uncertain variables, discharge/recharge and transmissivity have the largest expected errors (averaging 155 and 115 feet, respectively, per 509 nodes for the criterion of absolute value of error) and expected sample worths (averaging 29 and 14 feet, respectively, per 509 nodes). In contrast, initial water level and storage coefficient have lesser values. Of the more certain variables, transmissivity and initial water level generally have the largest expected errors (a maximum of 73 feet per 509 nodes) and expected sample worths (a maximum of 12 feet per 509 nodes); whereas storage coefficient and discharge/recharge have smaller values. These results likely are not typical of those from many ground-water basins, and may apply only to the Tucson basin. The largest expected errors are associated with nodes at which values of discharge/recharge are large or at which prior estimates of transmissivity are very uncertain. Large expected sample worths are associated with variables which have large expected errors or which could be sampled with relatively little uncertainty. Results are similar for all six of the error criteria used. Tests were made of the sensitivity of the method to such simplifications and assumptions as the type of distribution function assumed for a variable, the values of the estimated standard deviations of the distributions, and the number and spacing of the elements of each distribution. The results are sensitive to all of the assumptions and therefore likely are correct only in order of magnitude. However, the ranking of the types of variables in terms of magnitude of expected error and expected sample worth is not sensitive to the assumptions, and thus the general conclusions on relative effects of errors in different variables likely are valid. Limited studies of error propagation indicated that errors in predicted water levels associated with extreme erroneous values of a variable commonly are less than 4 feet per node at a distance of 1 mile from the tested node.
This suggests that in many cases, prediction errors associated with errors in basic data are not a major problem in digital modeling.
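    A minimal sketch of the kind of calculation the abstract describes: expected prediction error under a discrete, subjectively assigned frequency distribution for one uncertain model input, and the expected worth of sampling as the expected reduction in that error. The stand-in model response, the distribution values, and the residual sampling error below are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

# Hypothetical discrete frequency distribution for one uncertain input
# (e.g., transmissivity at a node): candidate values and subjective probabilities.
values = np.array([500.0, 1000.0, 2000.0, 4000.0])   # illustrative units
probs  = np.array([0.2, 0.4, 0.3, 0.1])

def predicted_head(x):
    """Stand-in for running the ground-water model with input value x.
    Returns a predicted water level (ft); purely illustrative."""
    return 2400.0 - 50.0 * np.log(x)

# Error criterion: absolute difference between the prediction made with the
# current best estimate and the prediction made with each possible 'true' value.
best_estimate = np.sum(probs * values)
pred_at_estimate = predicted_head(best_estimate)
expected_error = np.sum(probs * np.abs(predicted_head(values) - pred_at_estimate))

# Expected worth of sample data: crude placeholder in which a field measurement
# would leave only a small residual prediction error, so the worth is the
# expected reduction in error relative to the unsampled case.
residual_error_ft = 1.0                       # illustrative assumption
expected_worth = expected_error - residual_error_ft

print(f"expected error : {expected_error:.1f} ft")
print(f"expected worth : {expected_worth:.1f} ft")
```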

    A stochastic approach to space-time modeling of rainfall.

    Gupta, Vijay K. (Vijay Kumar), 1946- (The University of Arizona., 1973)
    This study gives a phenomenologically based stochastic model of space-time rainfall. Specifically, two random variables on the spatial rainfall are considered: the cumulative rainfall within a season and the maximum cumulative rainfall per rainfall event within a season. An approach is given to determine the cumulative distribution function (c.d.f.) of the cumulative rainfall per event, based on a particular random structure of space-time rainfall. Then the first two moments of the cumulative seasonal rainfall are derived based on a stochastic dependence between the cumulative rainfall per event and the number of rainfall events within a season. This stochastic dependence is important in the context of the spatial rainfall process. A theorem is then proved on the rate of convergence of the exact c.d.f. of the seasonal cumulative rainfall up to the iᵗʰ year, i ≥ 1, to its limiting c.d.f. Use of the limiting c.d.f. of the maximum cumulative rainfall per rainfall event up to the iᵗʰ year within a season is given in the context of determination of the 'design rainfall'. Such information is useful in the design of hydraulic structures. Special mathematical applications of the general theory are developed from a combination of empirical and phenomenologically based assumptions. A numerical application of this approach is demonstrated on the Atterbury watershed in the Southwestern United States.
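    As a rough illustration of the random-sum structure the abstract describes (seasonal total as the sum of a random number of per-event depths, with a stochastic dependence between event count and event size), here is a Monte Carlo sketch; the distributions and parameters are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_season(mean_events=12, dependence=0.3):
    """Seasonal cumulative rainfall as a random sum of per-event depths.
    'dependence' crudely links mean event depth to the number of events."""
    n_events = rng.poisson(mean_events)
    # Per-event depths (mm); mean depth shrinks slightly as event count grows,
    # mimicking a stochastic dependence between count and depth.
    mean_depth = 10.0 / (1.0 + dependence * n_events / mean_events)
    depths = rng.exponential(mean_depth, size=n_events)
    return depths.sum(), depths.max(initial=0.0)

seasons = np.array([simulate_season() for _ in range(20000)])
totals, maxima = seasons[:, 0], seasons[:, 1]

print("seasonal total  : mean %.1f mm, variance %.1f" % (totals.mean(), totals.var()))
print("max event depth : empirical 1-in-100-season quantile %.1f mm"
      % np.quantile(maxima, 1 - 1 / 100))
```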

    Evaluation of unconfined aquifer parameters using a successive line relaxation finite difference model.

    Rebuck, Ernest Charles, 1944- (The University of Arizona., 1972)
    A finite difference model was developed specifically for analyzing the Grand Island, Nebraska, aquifer test. Time-drawdown data for the aquifer test were fitted by least squares to an exponential type equation. To facilitate calibration of the model, interpolated distance-drawdown profiles also were fitted to an exponential type equation. The treatment of aquifer boundaries and the assumption of isotropic aquifer conditions affected the model-computed water table profile. The effect was significant enough to preclude accurate estimates of saturated hydraulic conductivity and specific yield. When the analysis was extended to long time periods of discharge, problems with the boundaries, particularly the distance to the lateral constant head boundary, led to unrealistic estimates of pumping level. The finite difference technique has its greatest application as a research method for analyzing short-duration aquifer tests provided that the aquifer conditions are well defined, measurements of pumping level are available, and drawdown measurements have been secured for at least two observation wells within close proximity of the discharge well. Because of difficulties in maintaining convergence and model stability, the finite difference model reviewed in this study is too cumbersome to be considered a practical, field method for the analysis of unconfined aquifer parameters.
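    The abstract does not state the form of the "exponential type equation" used; as a hedged sketch of the fitting step, here is a least-squares fit of synthetic time-drawdown data to one plausible such form, s(t) = a(1 − e^(−bt)), using scipy. The data and parameter values are invented, not the Grand Island test data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic time-drawdown data (minutes, feet) -- illustrative only.
t = np.array([1, 2, 5, 10, 20, 50, 100, 200, 500, 1000], dtype=float)
s_obs = 3.2 * (1.0 - np.exp(-0.01 * t)) + np.random.default_rng(1).normal(0, 0.05, t.size)

def drawdown(t, a, b):
    """One plausible 'exponential type' drawdown equation: s = a * (1 - exp(-b*t))."""
    return a * (1.0 - np.exp(-b * t))

(a_hat, b_hat), _ = curve_fit(drawdown, t, s_obs, p0=(1.0, 0.001))
print(f"fitted a = {a_hat:.2f} ft, b = {b_hat:.4f} 1/min")
```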

    Constructed wetlands and soil-aquifer treatment systems: Effects on the character of effluent organic matter

    Quanrud, David Matson (The University of Arizona., 2000)
    Within the context of potable reuse, there is a need for a more comprehensive examination of the quality of dissolved organic matter (DOM) in treated wastewater and the efficacy of different treatment schemes in removing or transforming DOM. In particular, there are significant information gaps regarding the character, fate, and health risks associated with effluent organic matter (EfOM). Two goals guided this research. The first goal was to evaluate the efficacy of constructed wetlands for wastewater polishing in a hot, arid environment, from the perspective of season-dependent effects on DOM. To this end, behavior of organics was evaluated over a 22-month period during treatment in a local constructed wetlands facility. The second goal was to examine changes in character of EfOM that accompany passage through natural treatment systems (either constructed wetlands or soil aquifer treatment, SAT). This was accomplished via isolation and characterization of organics collected along flowpaths of these treatment systems. Wetland effluent concentrations of dissolved organic carbon (DOC) and nonbiodegradable DOC were positively correlated with temperature. That is, the highest concentrations occurred in summer and were attributed to the combined effects of evapotranspiration (ET) by wetland vegetation along with production of wetland-derived natural organic matter (NOM). There was little if any change in the hydrophobic-hydrophilic character of DOM attending wetland treatment. Biodegradation of labile EfOM combined with contribution of wetland-derived NOM resulted in modest (at best) changes in distribution of carbon moieties in hydrophobic (HPO) and hydrophilic (HPI) acid isolates. Aliphatic carbon decreased during wetland treatment. Elemental analysis suggested that microbial activity is the dominant process controlling the character of wetland-derived NOM. Reactivity of isolates in forming trihalomethanes (THMs) during chlorination increased as a consequence of wetland treatment. Wetland-derived NOM was more reactive than EfOM in forming THMs. Uniform trends occurred among isolates of EfOM and wetland-derived NOM between biodegradability and THM production upon chlorination. Ultrahydrophilic EfOM was preferentially removed during vadose zone percolation of secondary effluent. The chemical character of EfOM (HPO- and HPI-acids) became more similar to NOM as a consequence of SAT. Genotoxicity of HPO-acids, on a per mass basis, increased after SAT.

    Remote-Sensing Soil Moisture Using Four-Dimensional Data Assimilation.

    Houser, Paul Raymond, 1970- (The University of Arizona., 1996)
    The feasibility of synthesizing distributed fields of remotely-sensed soil moisture by the novel application of four-dimensional data assimilation in a hydrological model was explored in this study. Six Push Broom Microwave Radiometer images gathered over Walnut Gulch, Arizona were assimilated into the TOPLATS hydrological model. Several alternative assimilation procedures were implemented, including a method that adjusted the statistics of the modeled field to match those in the remotely sensed image, and the more sophisticated, traditional methods of statistical interpolation and Newtonian nudging. The high observation density characteristic of remotely-sensed imagery poses a massive computational burden when used with statistical interpolation, necessitating observation reduction through subsampling or averaging. For Newtonian nudging, the high observation density compromises the conventional weighting assumptions, requiring modified weighting procedures. Remotely-sensed soil moisture images were found to contain horizontal correlations that change with time and have length scales of several tens of kilometers, presumably because they are dependent on antecedent precipitation patterns. Such correlation therefore has a horizontal length scale beyond the remotely sensed region that approaches or exceeds the catchment scale. This suggests that remotely-sensed information can be advected beyond the image area and across the whole catchment. The remotely-sensed data were available for only a short period, providing limited opportunity to investigate the effectiveness of surface-subsurface coupling provided by alternative assimilation procedures. Surface observations were advected into the subsurface using incomplete knowledge of the surface-subsurface correlation measured at only 2 sites. It is anticipated that improved vertical correlation specification will be needed for optimal soil moisture assimilation. Based on direct measurement comparisons and the plausibility of synthetic soil moisture patterns, Newtonian nudging assimilation procedures were preferred because they preserved the observed patterns within the sampled region, while also calculating plausible patterns in unmeasured regions. Statistical interpolation reduced to the trivial limit of direct data insertion in the sampled region and gave less plausible patterns outside this region. Matching the statistics of the modeled fields to those observed provided plausible patterns, but the observed patterns within the sampled area were largely lost.
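    A minimal sketch of the Newtonian-nudging idea described above: modeled soil moisture is relaxed toward a remotely sensed observation with a weight that decays with distance from the observation. The weight function, length scale, and nudging coefficient below are illustrative assumptions, not the settings used with TOPLATS or the Push Broom Microwave Radiometer data.

```python
import numpy as np

def nudge(theta_model, theta_obs, obs_x, grid_x, dt, g=1e-4, length_scale=10e3):
    """One nudging update: theta += g * dt * w(x) * (obs - model).
    theta_model : modeled soil moisture on the grid (1-D transect for brevity)
    theta_obs   : observed soil moisture at one location
    obs_x, grid_x : positions (m); w is a Cressman-like weight that decays with distance
    g           : nudging coefficient (1/s); dt in seconds
    """
    dist = np.abs(grid_x - obs_x)
    w = np.clip((length_scale**2 - dist**2) / (length_scale**2 + dist**2), 0.0, 1.0)
    return theta_model + g * dt * w * (theta_obs - theta_model)

grid_x = np.linspace(0, 50e3, 51)          # 1-D transect, 1 km spacing
theta = np.full(grid_x.size, 0.15)         # background soil moisture
theta = nudge(theta, theta_obs=0.25, obs_x=20e3, grid_x=grid_x, dt=3600)
print(theta.round(3))
```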

    Thermodynamic and isotopic systematics of chromium chemistry

    Ball, James William, 1945- (The University of Arizona., 1996)
    This investigation has produced four major results: (1) Thermodynamic properties of chromium metal, aqueous ions, hydrolysis species, oxides and hydroxides were compiled. Data were critically evaluated, some data were recalculated, and thermodynamic properties were selected. (2) A method was developed for separating chromium from its natural water matrix using sequential anion and cation exchange chromatography. (3) A method for determining the ⁵³Cr/⁵²Cr ratio using solid-source thermal ionization mass spectrometry with the silica gel-boric acid ionization-enhancement technique was developed. (4) Ground water samples from six locations were analyzed for their ⁵³Cr/⁵²Cr ratio using the above methods. Results from carefully measured electromotive force (emf) values for the reduction of Cr³⁺ to Cr²⁺ were recalculated for compatibility with the infinite dilution standard state, and a revised ∆G°(f) for Cr²⁺(aq) was calculated. Equilibrium constants for chromium(III) hydrolysis were taken from Rai et al. (1987) and for chromium(VI) hydrolysis from Palmer et al. (1987). The ion exchange method is based on retention of chromium(VI) on strongly basic anion exchange resin at pH 4 and its reductive elution with 2N HNO₃. Chromium(III) is retained on strongly acidic cation exchange resin at pH 1.3 and eluted with 5N HNO₃. Possible interferents include metals that form both oxyanions and cations. High-purity reagents and containers made of rigorously cleanable noncontaminating materials are required. Samples for mass spectrometry are pretreated with aqua regia and concentrated nitric acid, then mixed with silica and boric acid and transferred to the tantalum filament of a stainless steel and glass sample holder. The ⁵³Cr/⁵²Cr ratio was measured to avoid isobaric interferences with iron. To be significantly different from each other, isotopic signatures must differ by at least 0.5 per mil. Samples from six locations were examined for their ⁵³Cr/⁵²Cr ratio. For the samples with natural origin, the spread in δ⁵³Cr values of -2.0 to +3.0 per mil suggests that samples of chromium derived from differing source materials or from different geographic locations have distinct isotopic signatures. Conclusions regarding source-related variations in the isotopic signature of contaminant chromium are problematic, because specific information about the respective source materials is lacking.
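    For reference, δ⁵³Cr values in per mil are conventionally computed from the measured ⁵³Cr/⁵²Cr ratio relative to a reference standard; the abstract does not name the standard used, so the reference ratio in the sketch below is a placeholder.

```python
def delta53cr_permil(r_sample, r_standard):
    """delta-53Cr (per mil) = (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Placeholder reference ratio; not necessarily the value used in the dissertation.
R_STD = 0.11339
print(round(delta53cr_permil(0.11373, R_STD), 2))  # approximately +3.0 per mil
```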

    Investigation of the atmosphere-snow transfer process for hydrogen peroxide

    McConnell, Joseph Robert, 1958- (The University of Arizona., 1997)
    Of the three primary atmospheric oxidants, hydroxyl radical, ozone, and hydrogen peroxide (H₂O₂), only the latter is preserved in ice cores. To make quantitative use of the ice core archive, however, requires a detailed understanding of the physical processes that relate atmospheric concentrations to those in the snow, firn and thence ice. The transfer processes for H₂O₂ were investigated using field, laboratory, and computer modeling studies. Empirically and physically based numerical algorithms were developed to simulate the atmosphere-to-snow-to-firn transfer processes, and these models were coupled to a snow pack accumulation model. The models, tested using field data from Summit, Greenland, and South Pole, indicate that H₂O₂ is reversibly deposited to the snow surface, with subsequent uptake and release controlled by advection of air containing H₂O₂ through the top meters of the snow pack and temperature-driven diffusion within individual snow grains. This physically based model was successfully used to invert year-round surface snow concentrations to an estimate of atmospheric H₂O₂ at South Pole. Field data and model results clarify the importance of accumulation timing and seasonality in determining the H₂O₂ record preserved in the snow pack. A statistical analysis of recent accumulation patterns at South Pole indicates that spatial variability in accumulation has a strong influence on chemical concentrations preserved in the snow pack.

    Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network-Cloud Classification System

    Hong, Yang (The University of Arizona., 2003)
    Precipitation estimation from satellite information (visible, IR, or microwave) is becoming increasingly imperative because of its high spatial/temporal resolution and broad coverage unparalleled by ground-based data. After decades of effort in rainfall estimation based on IR imagery, the limitations and uncertainties of the existing techniques have been identified as: (1) pixel-based, local-scale feature extraction; (2) IR temperature thresholds to define rain/no-rain clouds; (3) the indirect relationship between rain rate and cloud-top temperature; (4) lumped techniques to model the high variability of cloud-precipitation processes; and (5) coarse scales of rainfall products. Continuing this line of work, a new version of Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network (PERSIANN), called the Cloud Classification System (CCS), is developed in this dissertation to address these limitations. CCS includes three consecutive components: (1) a hybrid segmentation algorithm, Hierarchically Topographical Thresholding and Stepwise Seeded Region Growing (HTH-SSRG), to segment satellite IR images into separated cloud patches; (2) a 3D feature extraction procedure to retrieve both pixel-based local-scale and patch-based large-scale features of each cloud patch at various heights; and (3) an ANN model, the Self-Organizing Nonlinear Output (SONO) network, which classifies cloud patches into similarity-based clusters using a Self-Organizing Feature Map (SOFM) and then calibrates hundreds of multi-parameter nonlinear functions to identify the relationship between each cloud type and its underlying precipitation characteristics, using the Probability Matching Method and Multi-Start Downhill Simplex optimization techniques. The model was calibrated first over the southwestern United States (100°-130°W and 25°-45°N) and then adaptively adjusted to the study region of the North American Monsoon Experiment (65°-135°W and 10°-50°N) using observations from Geostationary Operational Environmental Satellite (GOES) IR imagery, the Next Generation Radar (NEXRAD) rainfall network, and Tropical Rainfall Measuring Mission (TRMM) microwave rain rate estimates. CCS functions as a distributed model that first identifies cloud patches and then applies the best-matching cloud-precipitation function to each cloud patch to estimate instantaneous rain rate at high spatial resolution (4 km) and the full temporal resolution of GOES IR images (every 30 minutes). Evaluated over a range of spatial and temporal scales, the performance of CCS consistently compared favorably with the GOES Precipitation Index (GPI), Universal Adjusted GPI (UAGPI), PERSIANN, and Auto-Estimator (AE) algorithms. In particular, the large number of nonlinear functions and optimum IR-rain rate thresholds of the CCS model are highly variable, reflecting the complexity of the dominant cloud-precipitation processes from cloud patch to cloud patch over various regions. As a result, CCS can capture variability in rain rate at small scales more successfully than existing algorithms and can potentially provide a rainfall product from GOES IR, NEXRAD, and TRMM TMI (SSM/I) at 0.12° × 0.12° and 3-hour resolution with relatively low standard error (≈3.0 mm/hr) and high correlation coefficient (≈0.65).
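    As a toy illustration of the first CCS step, segmenting an IR image into separated cloud patches, here is a sketch that thresholds brightness temperature and labels connected patches with scipy; the fixed threshold and synthetic image stand in for the hierarchical HTH-SSRG algorithm, which is considerably more elaborate.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
# Synthetic GOES-like IR brightness-temperature field (K); colder = higher cloud tops.
tb = 290.0 + rng.normal(0, 3, size=(60, 60))
tb[10:25, 10:30] -= 60.0     # a cold cloud patch
tb[40:55, 35:50] -= 45.0     # another patch

# Simple fixed-threshold segmentation (HTH-SSRG instead uses hierarchical
# thresholds followed by seeded region growing).
cloud_mask = tb < 253.0
labels, n_patches = ndimage.label(cloud_mask)

# One example of a patch-scale feature: minimum cloud-top temperature per patch.
min_tb = ndimage.minimum(tb, labels=labels, index=range(1, n_patches + 1))
print(f"{n_patches} cloud patches; coldest tops (K): {np.round(min_tb, 1)}")
```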

    Multiscale analyses of permeability in porous and fractured media

    Hyun, Yunjung. (The University of Arizona., 2002)
    It has been shown by Neuman [1990], Di Federico and Neuman [1997, 1998a,b] and Di Federico et al. [1999] that observed multiscale behaviors of subsurface fluid flow and transport variables can be explained within the context of a unified stochastic framework, which views hydraulic conductivity as a random fractal characterized by a power variogram. Any such random fractal field is statistically nonhomogeneous but possesses homogeneous spatial increments. When the field is statistically isotropic, it is associated with a power variogram γ(s) = Cs²ᴴ where C is a constant, s is separation distance, and H is a Hurst coefficient (0 < H < 1). If the field is Gaussian it constitutes fractional Brownian motion (fBm). The authors have shown that the power variogram of a statistically isotropic or anisotropic fractal field can be constructed as a weighted integral from zero to infinity of exponential or Gaussian variograms of overlapping, homogeneous random fields (modes) having mutually uncorrelated increments and variance proportional to a power 2H of the integral (spatial correlation) scale. Low- and high-frequency cutoffs are related to length scales of the sampling window (domain) and data support (sample volume), respectively. Intermediate cutoffs account for lacunarity due to gaps in the multiscale hierarchy, created by a hiatus of modes associated with discrete ranges of scales. In this dissertation, I investigate the effects of domain and support scales on the multiscale properties of random fractal fields characterized by a power variogram using real and synthetic data. Neuman [1994] and Di Federico and Neuman [1997] have concluded empirically, on the basis of hydraulic conductivity data from many sites, that a finite window of length-scale L filters out (truncates) all modes having integral scales λ larger than λ = μL where μ ≃ 1/3. I confirm their finding computationally by generating truncated fBm realizations on a large grid, using various initial values of μ, and demonstrating that μ ≃ 1/3 for windows smaller than the original grid. My synthetic experiments also show that generating an fBm realization on a finite grid using a truncated power variogram yields sample variograms that are more consistent with theory than those obtained when the realization is generated using a power variogram. Interpreting sample data from such a realization using wavelet analysis yields more reliable estimates of the Hurst coefficient than those obtained when one employs variogram analysis. Di Federico et al. [1997] developed expressions for the equivalent hydraulic conductivity of a box-shaped support volume, embedded in a log-hydraulic conductivity field characterized by a power variogram, under the action of a mean uniform hydraulic gradient. I demonstrate that their expression and the empirically derived value of μ ≃ 1/3 are consistent with a pronounced permeability scale effect observed in unsaturated fractured tuff at the Apache Leap Research Site (ALRS) near Superior, Arizona. I then investigate the compatibility of single-hole air permeability data, obtained at the ALRS on a nominal support scale of about 1 m, with various scaling models including fBm, fGn (fractional Gaussian noise), fLm (fractional Lévy motion), bfLm (bounded fractional Lévy motion) and UM (Universal Multifractals). I find that the data have a Lévy-like distribution at small lags but become Gaussian as the lag increases (corresponding to bfLm).
Though this implies multiple scaling, it is not consistent with the UM model, which considers a unique distribution. If one nevertheless applies a UM model to the data, one obtains a very small codimension, which suggests that multiple scaling is of minor consequence (applying the UM model to permeability rather than log-permeability data yields a larger codimension but is otherwise not consistent with these data). Variogram and rescaled range analyses of the log-permeability data yield comparable estimates of the Hurst coefficient. Rescaled range analysis shows that the data are not compatible with an fGn model. I conclude that the data are represented most closely by a truncated fBm model.
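    A brief sketch of the power-variogram relationship γ(s) = Cs²ᴴ used above: estimating H from a sample variogram by a log-log fit. The synthetic series (ordinary Brownian motion, H = 0.5) and the lag range are illustrative; the wavelet-based estimation the abstract finds more reliable is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 1-D series with fBm-like increments (H = 0.5, i.e. Brownian motion),
# standing in for a log-permeability transect.
z = np.cumsum(rng.normal(size=4096))

def sample_variogram(z, lags):
    """gamma(s) = 0.5 * mean[(z(x+s) - z(x))^2] for each lag s."""
    return np.array([0.5 * np.mean((z[s:] - z[:-s]) ** 2) for s in lags])

lags = np.arange(1, 64)
gamma = sample_variogram(z, lags)

# Power variogram gamma(s) = C * s^(2H)  =>  log gamma = log C + 2H * log s.
slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
print(f"estimated Hurst coefficient H ~= {slope / 2:.2f}")
```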

    Deciding to Recharge

    Eden, Susanna (The University of Arizona., 1999)
    Public water policy decision making tends to be too complex and dynamic to be described fully by traditional, rational models. Information intended to improve decisions often is rendered ineffective by a failure to understand the process. An alternative, holistic description of how such decisions actually are made is presented here and illustrated with a case study. The role of information in the process is highlighted. Development of a Regional Recharge Plan for Tucson, Arizona, is analyzed as the case study. The description of how decisions are made is based on an image of public water policy decision making as 1) a structured, nested network of individuals and groups with connections to their environment through their senses, mediated by their knowledge; and 2) a nonlinear process in which decisions feed back to affect the preferences and intentions of the people involved, the structure of their interactions, and the environment in which they operate. The analytical components of this image are 1) the decision makers, 2) the relevant features of their environment, 3) the structure of their interactions, and 4) the products or outputs of their deliberations. Policy decisions analyzed by these components, in contrast to the traditional analysis, disclose a new set of relationships and suggest a new view of the uses of information. In the context of information use, perhaps the most important output of the decision process is a shared interpretation of the policy issue. This interpretation sets the boundaries of the issue and the nature of issue-relevant information. Participants are unlikely to attend to information incompatible with the shared interpretation. Information is effective when used to shape the issue interpretation, fill specific gaps identified as issue-relevant during the process, rationalize choices, and reshape the issue interpretation as the issue environment evolves.