ABOUT THIS COLLECTION

This open-access archive contains publications from University of Arizona faculty, researchers, and staff, primarily open-access versions of formally published journal articles. The collection includes published articles and final accepted manuscripts submitted by UA faculty under the UA Open Access Policy. It also includes books, book chapters, book reviews, presentations, data, and other scholarly materials that submitters have chosen to make available in the repository.


Submit Content

  • Log in to the repository using your NetID and password.
  • Click the "Submissions" link in the left sidebar (under "My Account").
  • Start a new submission in the UA Faculty Publications collection.
  • Library staff will check publisher policies, including embargo periods related to your submission.
  • You will receive an email with a persistent link to your submission when it is approved.

Questions?

Contact open-access@email.arizona.edu with your questions about the UA Faculty Publications collection.

Recent Submissions

  • H0LiCOW – X. Spectroscopic/imaging survey and galaxy-group identification around the strong gravitational lens system WFI 2033−4723

    Sluse, D; Rusu, C E; Fassnacht, C D; Sonnenfeld, A; Richard, J; Auger, M W; Coccato, L; Wong, K C; Suyu, S H; Treu, T; et al. (OXFORD UNIV PRESS, 2019-09-05)
    Galaxies and galaxy groups located along the line of sight towards gravitationally lensed quasars produce high-order perturbations of the gravitational potential at the lens position. When these perturbations are too large, they can induce a systematic error on H0 of a few per cent if the lens system is used for cosmological inference and the perturbers are not explicitly accounted for in the lens model. In this work, we present a detailed characterization of the environment of the lens system WFI 2033−4723 (z_src = 1.662, z_lens = 0.6575), one of the core targets of the H0LiCOW project for which we present cosmological inferences in a companion paper. We use the Gemini and ESO-Very Large telescopes to measure the spectroscopic redshifts of the brightest galaxies towards the lens, and use the ESO-MUSE integral field spectrograph to measure the velocity dispersion of the lens (σ_los = 250 +15/−21 km s⁻¹) and of several nearby galaxies. In addition, we measure photometric redshifts and stellar masses of all galaxies down to i < 23 mag, mainly based on Dark Energy Survey imaging (DR1). Our new catalogue, complemented with literature data, more than doubles the number of known galaxy spectroscopic redshifts in the direct vicinity of the lens, expanding to 116 (64) the number of spectroscopic redshifts for galaxies separated by less than 3 arcmin (2 arcmin) from the lens. Using the flexion-shift as a measure of the amplitude of the gravitational perturbation, we identify two galaxy groups and three galaxies that require specific attention in the lens models. The ESO MUSE data enable us to measure the velocity dispersions of three of these galaxies. These results are essential for the cosmological inference analysis presented in Rusu et al.
  • X-ray Lightcurves from Realistic Polar Cap Models: Inclined Pulsar Magnetospheres and Multipole Fields

    Lockhart, Will; Gralla, Samuel E; Özel, Feryal; Psaltis, Dimitrios; Univ Arizona, Dept Phys; Univ Arizona, Dept Astron; Univ Arizona, Steward Observ (OXFORD UNIV PRESS, 2019-09-09)
    Thermal X-ray emission from rotation-powered pulsars is believed to originate from localized ‘hotspots’ on the stellar surface occurring where large-scale currents from the magnetosphere return to heat the atmosphere. Light-curve modelling has primarily been limited to simple models, such as circular antipodal emitting regions with constant temperature. We calculate more realistic temperature distributions within the polar caps, taking advantage of recent advances in magnetospheric theory, and we consider their effect on the predicted light curves. The emitting regions are non-circular even for a pure dipole magnetic field, and the inclusion of an aligned magnetic quadrupole moment introduces a north–south asymmetry. As the quadrupole moment is increased, one hotspot grows in size before becoming a thin ring surrounding the star. For the pure dipole case, moving to the more realistic model changes the light curves by 5–10 per cent for millisecond pulsars, helping to quantify the systematic uncertainty present in current dipolar models. Including the quadrupole gives considerable freedom in generating more complex light curves. We explore whether these simple dipole+quadrupole models can account for the qualitative features of the light curve of PSR J0437−4715.
  • Analysis, Simulation, and Optimization of Stochastic Vesicle Dynamics in Synaptic Transmission

    Zhang, Calvin; Peskin, Charles S.; Univ Arizona, Dept Math (WILEY, 2020-01)
    Synaptic transmission is the mechanism of information transfer from one neuron to another (or from a neuron to a muscle or to an endocrine cell). An important step in this physiological process is the stochastic release of neurotransmitter from vesicles that fuse with the presynaptic membrane and spill their contents into the synaptic cleft. We are concerned here with the formulation, analysis, and simulation of a mathematical model that describes the stochastic docking, undocking, and release of synaptic vesicles and their effect on synaptic signal transmission. The focus of this paper is on the parameter p(0), the probability of release for each docked vesicle when an action potential arrives. We study the influence of this parameter on the statistics of the release process and on the theoretical capability of the model synapse in reconstructing various desired outputs based on the timing and amount of neurotransmitter release. This theoretical capability is assessed by formulating and solving an optimal filtering problem. Methods for parameter identification are proposed and applied to simulated data. (c) 2019 Wiley Periodicals, Inc.
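    To make the role of the release probability concrete, the toy simulation below draws each docked vesicle's release as an independent Bernoulli trial with probability P0 (the abstract's release parameter p(0)) at every action potential, with docking and undocking treated as simple stochastic events. All rates, the number of docking sites, and the spike schedule are illustrative assumptions, not values or mechanisms from the paper.

        import numpy as np

        # Toy simulation of stochastic vesicle docking, undocking, and release.
        # Parameter values are illustrative placeholders, not taken from the paper.
        rng = np.random.default_rng(0)

        N_SITES = 10      # number of docking sites (assumed)
        P0 = 0.2          # release probability per docked vesicle per action potential
        K_DOCK = 5.0      # docking rate of an empty site (1/s, assumed)
        K_UNDOCK = 1.0    # undocking rate of a docked vesicle (1/s, assumed)
        DT = 1e-3         # time step (s)
        T_END = 1.0       # simulated time (s)
        spike_steps = set(range(50, int(T_END / DT), 50))  # an action potential every 50 ms

        docked = np.zeros(N_SITES, dtype=bool)
        released_per_spike = []

        for step in range(int(T_END / DT)):
            # stochastic docking/undocking during this time step
            docked = np.where(docked,
                              rng.random(N_SITES) > K_UNDOCK * DT,  # stays docked
                              rng.random(N_SITES) < K_DOCK * DT)    # becomes docked
            if step in spike_steps:
                # each docked vesicle releases independently with probability P0
                fire = docked & (rng.random(N_SITES) < P0)
                released_per_spike.append(int(fire.sum()))
                docked &= ~fire

        print("vesicles released at each action potential:", released_per_spike)

    Raising or lowering P0 in this sketch trades immediate release against the pool of vesicles left docked for later spikes, which is the kind of trade-off the paper's statistical and optimal filtering analysis examines.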
  • Code Obfuscation: Why is This Still a Thing?

    Collberg, Christian; Univ Arizona, Dept Comp Sci (ASSOC COMPUTING MACHINERY, 2018)
    Early developments in code obfuscation were chiefly motivated by the needs of Digital Rights Management (DRM) [7]. Other suggested applications included intellectual property protection of software [4] and code diversification to combat the monoculture problem of operating systems [2]. Code obfuscation is typically employed in security scenarios where an adversary is in complete control over a device and the software it contains and can tamper with it at will. We call such situations the Man-At-The-End (MATE) [3] scenario. MATE scenarios are the best of all worlds for attackers and, consequently, the worst of all worlds for defenders: Not only do attackers have physical access to a device and can reverse engineer and tamper with it at their leisure, they often have unbounded resources (time, computational power, etc.) to do so. Defenders, on the other hand, are often severely constrained in the types of protective techniques available to them and the amount of overhead they can tolerate. In other words, there is an asymmetry between the constraints of attackers and defenders. Moreover, DRM is becoming less prevalent (songs for sale on the Apple iTunes Store are no longer protected by DRM, for example); there are new cryptographically-based obfuscation techniques [1] that promise provably secure obfuscation; secure enclaves [5] are making it into commodity hardware, providing a safe haven for security sensitive code; and recent advances in program analysis [12] and generic de-obfuscation [13] provide algorithms that render current code obfuscation techniques impotent. Thus, one may reasonably ask the question: "Is Code Obfuscation Still a Thing?" Somewhat surprisingly, it appears that the answer is yes. In a recent report, Gartner [14] lists 19 companies active in this space (8 of which were founded since 2010) and there are still (in 2017) many papers published on code obfuscation, code de-obfuscation, anti-tamper protection, reverse engineering, and related technologies. One of the reasons for this resurgence of code obfuscation as a protective technology is that, more and more, we are faced with applications where security-sensitive code needs to run on unsecured endpoints. In this talk we will show MATE attacks that appear in many novel and unlikely scenarios, including smart cars [6], smart meters [9], mobile applications such as Snapchat and smartphone games, Internet of Things applications [8], and ad blockers in web browsers [11]. We will furthermore show novel code obfuscation techniques that increase the workload of attackers [10] and which, at least for a time, purport to restore the symmetry between attackers and defenders.
  • Single-shot phase retrieval with complex diversity

    Eguchi, Akira; Milster, Tom D; Univ Arizona, Ctr Opt Sci (OPTICAL SOC AMER, 2019-11-01)
    The concept of complex diversity is introduced, which adequately accounts for special considerations in the design of the system and the reconstruction algorithm for single-shot phase retrieval techniques. Complex-number pupil filters containing both amplitude and phase values are extracted by numerical propagation from a computer-generated hologram design, which generates multiple images in a single acquisition. The reconstruction is performed by a Fourier iterative algorithm modified with an area restriction to avoid noise amplification. Numerical simulations show that the complex diversity technique estimates extrinsic Kolmogorov aberration better than conventional single-shot techniques for a distant point object. Experiments show that sensorless adaptive optics correction is achieved using the complex diversity technique. (C) 2019 Optical Society of America
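    For readers unfamiliar with Fourier iterative reconstruction, the sketch below is a textbook error-reduction (Gerchberg-Saxton-style) loop with a support ("area") constraint applied to a synthetic object. It is a rough point of reference only: it does not model the complex-diversity pupil filters or the authors' specific modification, and the array sizes and synthetic aberration are assumptions.

        import numpy as np

        # Generic error-reduction phase retrieval with an area (support) restriction.
        # This is a standard textbook loop, not the authors' modified algorithm.
        rng = np.random.default_rng(1)

        N = 64
        support = np.zeros((N, N), dtype=bool)
        support[16:48, 16:48] = True                       # assumed known object area

        true_field = support * np.exp(1j * rng.normal(0.0, 0.5, (N, N)))  # synthetic object
        measured_amp = np.abs(np.fft.fft2(true_field))     # simulated focal-plane amplitude

        estimate = support * np.exp(1j * rng.uniform(-np.pi, np.pi, (N, N)))
        for _ in range(200):
            F = np.fft.fft2(estimate)
            F = measured_amp * np.exp(1j * np.angle(F))    # impose the measured amplitude
            estimate = np.fft.ifft2(F)
            estimate = estimate * support                  # area restriction in the object plane

        err = np.linalg.norm(np.abs(np.fft.fft2(estimate)) - measured_amp) / np.linalg.norm(measured_amp)
        print(f"relative Fourier-amplitude error after 200 iterations: {err:.3e}")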
  • Is greed contagious? Four experimental studies

    Cardella, Eric; Kugler, Tamar; Anderson, Jennifer; Connolly, Terry; Univ Arizona, Eller Coll Management (WILEY, 2019-12)
    Do people become greedier when interacting with others they perceive to be greedy? It has been speculated that greed contagion exists and may have influenced the 2008 financial collapse. We examined this possibility in four experimental studies using a common pool resource dilemma. Specifically, we tested whether participants' second-round (R2) withdrawal from the common pool was influenced (a) by their assessment of how greedy their opponents' first-round (R1) withdrawal was, (b) by R1 opponents' reputation for being greedy, (c) by observing past behavior of others in unrelated interactions, and (d) when R1 opponents directly confronted them with an assessment of the greediness of their R1 withdrawal. In addition, Study 2 examined R2 interactions involving new opponents. Taken together, the results suggest that there is contagion of greed. However, the connection appears to be driven by participants adjusting to their opponent's actual behavior, not by their evaluation of the greediness of such behavior. It seems that perceptions of greed do not mediate future behavior and, thus, are not necessarily contagious, but norms of selfish behavior are. In this sense, greed perceptions appear to be epiphenomenal in that they are an incidental by-product of the behavioral interaction. We discuss the implications of these findings and suggest directions for further research.
  • The development and validation of the Planet Formation Concept Inventory

    Simon, Molly N.; Prather, Edward E.; Buxner, Sanlyn R.; Impey, Chris D.; Univ Arizona, Dept Astron & Steward Observ; Univ Arizona, Dept Teaching Learning & Sociocultural Studies (ROUTLEDGE JOURNALS, TAYLOR & FRANCIS LTD, 2019-11-03)
    The discovery and characterisation of planets orbiting distant stars has shed light on the origin of our own Solar System. It is important that college-level introductory astronomy students have a general understanding of the planet formation process before they are able to draw parallels between extrasolar systems and our own Solar System. In this work, we introduce the Planet Formation Concept Inventory (PFCI), an educational research tool used to assess student learning on the topic of planet formation. The PFCI Version 3 was administered to N = 561 students pre-instruction and N = 374 students post-instruction. Here, we present a Classical Test Theory (CTT) analysis of the PFCI Version 3. Ultimately, we conclude that the PFCI is a reliable and valid instrument that can differentiate experts from novices, and can be used to assess college-level introductory astronomy students' learning on the topic of planet formation. Initial findings on class normalised gain scores indicate that the PFCI may be capable of assessing the effectiveness of different instructional models. In the future, we recommend a national study of the PFCI to discern its ability to provide insight regarding the ascribed characteristics of learners and the effectiveness of different instructional strategies being used to teach this topic.
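    For context, the class normalised gain referred to here is conventionally computed from class-average pre- and post-instruction scores; a standard (Hake-style) definition, which may differ in detail from the statistic used in the paper, is

        \[
          \langle g \rangle \;=\; \frac{\langle \mathrm{post} \rangle - \langle \mathrm{pre} \rangle}{100\% - \langle \mathrm{pre} \rangle},
        \]

    so a class averaging 40% before and 70% after instruction would have a normalised gain of 0.5.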
  • The Bolsonaro Election, Antiblackness, and Changing Race Relations in Brazil

    da Silva, Antonio José Bacelar; Larkins, Erika Robb; Univ Arizona (WILEY, 2019-11-11)
    We apply the concept of antiblackness and a Deleuzian approach to sociopolitical events to analyze Jair Bolsonaro's 2018 election in Brazil. Historically, Brazilians turned from overt expressions of antiblackness to subtler forms of racial prejudice, what Sergio Buarque de Holanda (1956) called the "cordial man" who practiced a "gentlemanly" form of white supremacy. Recently, however, cordial racism has eroded in favor of more virulent and explosive manifestations of antiblackness that fueled the sociopolitical climate that enabled Bolsonaro's rise to power. We examine the antiblack backlash against race-conscious laws and policies implemented during the Workers' Party era (2002-16), showing a gradual shift toward more overt expressions of antiblackness that Bolsonaro wielded to political effect in his 2018 campaign.
  • Differential resistance and resilience of functional groups to livestock grazing maintain ecosystem stability in an alpine steppe on the Qinghai-Tibetan Plateau

    Ganjurjav, Hasbagan; Zhang, Yong; Gornish, Elise S; Hu, Guozheng; Li, Yue; Wan, Yunfan; Gao, Qingzhu; Univ Arizona, Sch Nat Resources & Environm (ACADEMIC PRESS LTD- ELSEVIER SCIENCE LTD, 2019-12-01)
    Ecosystem stability is one of the main factors maintaining ecosystem functioning and is closely related to temporal variability in productivity. Resistance and resilience reflect the tolerance and recovery ability, respectively, of a plant community under perturbation, which are important for maintaining the stability of ecosystems. Generally, heavy grazing reduces the stability of grassland ecosystems, causing grassland degradation. However, how livestock grazing affects ecosystem stability is unclear in alpine steppe ecosystems. We conducted a five-year grazing experiment with Tibetan sheep in a semi-arid alpine steppe on the Qinghai-Tibetan Plateau, China. The experimental treatments included no grazing (NG), light grazing (LG, 2.4 sheep per ha), moderate grazing (MG, 3.6 sheep per ha) and heavy grazing (HG, 6.0 sheep per ha). We calculated the resistance and resilience of three plant functional groups and ecosystem stability under the three grazing intensities using aboveground primary productivity. The results showed that with increasing grazing intensity, the aboveground biomass of each functional group significantly decreased. As grazing intensity increased, the resistance of forbs first increased and then decreased. The resilience of graminoids in HG plots was significantly lower than in LG plots, but the resilience of legumes in HG plots was higher than in LG and MG plots. The resilience of graminoids was significantly higher than that of legumes and forbs under LG and MG treatments. In HG treatments, the resilience of legumes was higher than that of graminoids and forbs. Ecosystem stability did not change under the different grazing intensities because of the dissimilar resistance and resilience responses of the functional groups. Our results highlight how the differential resistance and resilience of functional groups allow the alpine steppe to tolerate grazing even at heavy intensity. However, the risk of alpine steppe degradation under heavy grazing still needs to be considered in grassland management due to sharp decreases in productivity.
  • Using strip seeding to test how restoration design affects randomness of community assembly

    Gornish, Elise S.; Shaw, Julea; Gillespie, Breahna M.; Univ Arizona, Sch Nat Resources & Environm (WILEY, 2019-11)
    The reestablishment and enhancement of plant diversity is typically a priority for restoration practitioners. Since diversity and stability can be affected by the magnitude to which randomness drives community dynamics, modifying randomness (via habitat heterogeneity) could provide utility for vegetation managers. We investigated the value of using strip seeding to manipulate the magnitude to which randomness structures plant communities across a grassland in Davis, California. Five years after restoring portions of a degraded site (0, 33, 50, 66, and 100% of an area) to create patches of seeded and unseeded strips, we assessed the amount of Jaccard dissimilarity across quadrats within strips and estimated the magnitude to which randomness contributed to community assembly (termed the nugget). We found higher nuggets in the 66 and 33% seeding treatment levels compared to the 0, 50, and 100% seeding treatment levels. In the 33 and 66% level of the seeding treatment, we also found that unseeded strips, which are regularly exposed to random events of dispersal from seeded strips, had a higher nugget than seeded strips. This work suggests that strategic seeding techniques that enhance habitat heterogeneity can increase the role of randomness in community dynamics. Strip seeding strategies appear to provide utility as a tool to indirectly enhance diversity across a degraded site.
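    The dissimilarity metric used here is simple to compute from presence/absence data: Jaccard dissimilarity between two quadrats is one minus the fraction of species they share. The sketch below shows the pairwise computation on a few hypothetical quadrats; the quadrat names and species lists are placeholders, not data from the study.

        from itertools import combinations

        # Pairwise Jaccard dissimilarity between quadrats from presence/absence data.
        # The quadrat names and species sets below are hypothetical.
        quadrats = {
            "Q1": {"Bromus", "Elymus", "Festuca"},
            "Q2": {"Bromus", "Festuca", "Trifolium"},
            "Q3": {"Avena", "Trifolium"},
        }

        def jaccard_dissimilarity(a, b):
            """1 - |A intersect B| / |A union B| for two sets of species."""
            union = a | b
            if not union:
                return 0.0
            return 1.0 - len(a & b) / len(union)

        for (name1, sp1), (name2, sp2) in combinations(quadrats.items(), 2):
            print(name1, name2, round(jaccard_dissimilarity(sp1, sp2), 3))

    Estimating the nugget itself additionally requires fitting a variogram to these dissimilarities as a function of distance between quadrats, which is beyond this sketch.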
  • On Entropy Minimization and Convergence

    Dostoglou, S.; Hughes, A.; Xue, Jianfei; Univ Arizona, Dept Math (SPRINGER, 2019-11)
    We examine the minimization of information entropy for measures on the phase space of bounded domains, subject to constraints that are averages of grand canonical distributions. We describe the set of all such constraints and show that it equals the set of averages of all probability measures absolutely continuous with respect to the standard measure on the phase space (with the exception of the measure concentrated on the empty configuration). We also investigate how the set of constraints relates to the domain of the microcanonical thermodynamic limit entropy. We then show that, for fixed constraints, the parameters of the corresponding grand canonical distribution converge, as volume increases, to the corresponding parameters (derivatives, when they exist) of the thermodynamic limit entropy. The results hold when the energy is the sum of any stable, tempered interaction potential that satisfies the Gibbs variational principle (e.g. Lennard-Jones) and the kinetic energy. The same tools and the strict convexity of the thermodynamic limit pressure for continuous systems (valid whenever the Gibbs variational principle holds) give a solid foundation to the folklore local homeomorphism between thermodynamic and macroscopic quantities.
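    The variational problem behind this setup can be written schematically (glossing over the measure-theoretic care the paper takes, and using generic symbols rather than the paper's notation) as

        \[
          \min_{f \ge 0} \int f \log f \, d\lambda
          \quad \text{subject to} \quad
          \int f \, d\lambda = 1, \qquad
          \int H f \, d\lambda = E, \qquad
          \int N f \, d\lambda = \bar{N},
        \]

    whose formal minimizer is the grand canonical density

        \[
          f \;=\; \frac{1}{\Xi(\beta,\mu)} \, e^{-\beta\,(H - \mu N)},
        \]

    with the Lagrange multipliers β and μ fixed by the constraints; the convergence results described above concern how these parameters behave as the volume grows.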
  • Fast Approximate Score Computation on Large-Scale Distributed Data for Learning Multinomial Bayesian Networks

    Katib, Anas; Rao, Praveen; Barnard, Kobus; Kamhoua, Charles; Univ Arizona, Dept Comp Sci (ASSOC COMPUTING MACHINERY, 2019-06)
    In this article, we focus on the problem of learning a Bayesian network over distributed data stored in a commodity cluster. Specifically, we address the challenge of computing the scoring function over distributed data in an efficient and scalable manner, which is a fundamental task during learning. While exact score computation can be done using the MapReduce-style computation, our goal is to compute approximate scores much faster with probabilistic error bounds and in a scalable manner. We propose a novel approach, which is designed to achieve the following: (a) decentralized score computation using the principle of gossiping; (b) lower resource consumption via a probabilistic approach for maintaining scores using the properties of a Markov chain; and (c) effective distribution of tasks during score computation (on large datasets) by synergistically combining well-known hashing techniques. We conduct theoretical analysis of our approach in terms of convergence speed of the statistics required for score computation, and memory and network bandwidth consumption. We also discuss how our approach is capable of efficiently recomputing scores when new data are available. We conducted a comprehensive evaluation of our approach and compared it with the MapReduce-style computation using datasets of different characteristics on a 16-node cluster. When the MapReduce-style computation provided exact statistics for score computation, it was nearly 10 times slower than our approach. Although it ran faster on randomly sampled datasets than on the entire datasets, it performed worse than our approach in terms of accuracy. Our approach achieved high accuracy (below 6% average relative error) in estimating the statistics for approximate score computation on all the tested datasets. In conclusion, it provides a feasible tradeoff between computation time and accuracy for fast approximate score computation on large-scale distributed data.
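    The gossip idea at the heart of the approach can be illustrated with plain randomized averaging: each node holds a local count (a sufficient statistic needed for a multinomial score), repeatedly averages it with a randomly chosen peer, and converges to the global average, from which the global count follows. The sketch below is a generic gossip-averaging toy with made-up counts, not the authors' protocol (which additionally uses probabilistic score maintenance and hashing).

        import random

        # Randomized gossip averaging of one sufficient statistic (a count) across nodes.
        # Local counts are hypothetical; in practice one such value exists per
        # (variable, parent-configuration, state) combination.
        random.seed(42)

        local_counts = [120.0, 75.0, 240.0, 60.0, 95.0, 180.0, 30.0, 210.0]
        n = len(local_counts)
        estimates = local_counts[:]          # each node's current estimate of the average

        for _ in range(500):                 # gossip rounds
            i, j = random.sample(range(n), 2)
            avg = (estimates[i] + estimates[j]) / 2.0
            estimates[i] = estimates[j] = avg

        print("true global count:      ", sum(local_counts))
        print("node 0 estimate (n*avg):", round(n * estimates[0], 2))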
  • Randomization procedures in single-case intervention research contexts: (Some of) "the rest of the story"

    Levin, Joel R; Kratochwill, Thomas R; Ferron, John M; Univ Arizona (WILEY, 2019-11-01)
    Following up on articles recently published in this journal, the present contribution tells (some of) "the rest of the story" about the value of randomization in single-case intervention research investigations. Invoking principles of internal, statistical-conclusion, and external validity, we begin by emphasizing the critical distinction between design randomization and analysis randomization, along with the necessary correspondence between the two. Four different types of single-case design-and-analysis randomization are then discussed. The persistent negative influence of serially dependent single-case outcome observations is highlighted, accompanied by examples of inappropriate applications of parametric and nonparametric tests that have appeared in the literature. We conclude by presenting valid applications of single-case randomization procedures in various single-case intervention contexts, with specific reference to a freely available Excel-based software package that can be accessed to incorporate the present randomization schemes into a wide variety of single-case intervention designs and analyses.
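    As a concrete illustration of matching analysis randomization to design randomization, the sketch below runs a randomization test for a simple AB single-case design in which the intervention start point was randomly drawn from a set of admissible start points; the observed statistic is compared against its value under every start point the design could have produced. The data, start points, and test statistic are all hypothetical, and this is not the Excel-based package referenced in the article.

        import numpy as np

        # Randomization test for an AB single-case design with a randomly
        # selected intervention start point. All values are hypothetical.
        outcomes = np.array([7, 8, 6, 7, 9, 8, 4, 3, 4, 2, 3, 2, 1], dtype=float)
        admissible_starts = range(5, 11)   # start points the design allowed (0-based)
        actual_start = 7                   # the start point actually drawn

        def mean_shift(data, start):
            """Test statistic: baseline mean minus intervention-phase mean."""
            return data[:start].mean() - data[start:].mean()

        observed = mean_shift(outcomes, actual_start)
        # Reference distribution: the statistic under every admissible start point.
        reference = np.array([mean_shift(outcomes, s) for s in admissible_starts])
        p_value = np.mean(reference >= observed)
        print(f"observed shift = {observed:.2f}, randomization p-value = {p_value:.3f}")

    Because the reference distribution is generated by the same randomization scheme used in the design, the resulting p-value does not rely on the outcome observations being serially independent.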
  • An in situ investigation on the origins and processing of circumstellar oxide and silicate grains in carbonaceous chondrites

    Zega, Thomas J.; Haenecour, Pierre; Floss, Christine; Univ Arizona, Lunar & Planetary Lab; Univ Arizona, Dept Mat Sci & Engn (WILEY, 2019-11-13)
    We report on the isotopic, chemical, and structural properties of four O-rich presolar grains identified in situ in the Adelaide ungrouped C2, LaPaz Icefield (LAP) 031117 CO3.0, and Dominion Range (DOM) 08006 CO3.0 chondrites. All four grains have oxygen-isotopic compositions consistent with origins in the circumstellar envelopes (CSE) of low-mass O-rich stars evolved along the red-giant and asymptotic-giant branch (RGB, AGB, respectively) of stellar evolution. Transmission electron microscope (TEM) analyses, enabled by focused-ion-beam scanning electron microscope extraction, show that the grain from Adelaide is a single-crystal Mg-Al spinel, and comparison with equilibrium thermodynamic predictions constrains its condensation to 1500 K assuming a total pressure ≤ 10⁻³ atm in its host CSE. In comparison, TEM analysis of two grains identified in the LAP 031117 chondrite exhibits different microstructures. Grain LAP-81 is composed of olivine containing a Ca-rich and a Ca-poor domain, both of which show distinct orientations, suggesting changing thermodynamic conditions in the host CSE that cannot be precisely constrained. LAP-104 contains a polycrystalline assemblage of ferromagnesian silicates similar to previous reports of nanocrystalline presolar Fe-rich silicates that formed under nonequilibrium conditions. Lastly, TEM shows that the grain extracted from DOM 08006 is a polycrystalline assemblage of Cr-bearing spinel. The grains occur in different orientations, likely reflecting mechanical assembly in their host CSE. The O-isotopic and Cr-rich compositions appear to point toward nonequilibrium condensation. The spinel is surrounded by an isotopically solar pyroxene lacking long-range atomic order and could have served as a nucleation site for its condensation in the interstellar medium or the inner solar protoplanetary disk.
  • Hearing care across the life course provided in the community

    Suen, Jonathan J; Bhatnagar, Kaustubh; Emmett, Susan D; Marrone, Nicole; Kleindienst Robler, Samantha; Swanepoel, De Wet; Wong, Aileen; Nieman, Carrie L; Univ Arizona, Dept Speech Language & Hearing Sci (WORLD HEALTH ORGANIZATION, 2019-10)
    Untreated hearing loss is recognized as a growing global health priority because of its prevalence and harmful effects on health and well-being. Until recently, little progress had been made in expanding hearing care beyond traditional clinic-based models to incorporate public health approaches that increase accessibility to and affordability of hearing care. As demonstrated in numerous countries and for many health conditions, sharing health-care tasks with community health workers (CHWs) offers advantages as a complementary approach to expand health-service delivery and improve public health. This paper explores the possibilities of task shifting to provide hearing care across the life course by reviewing several ongoing projects in a variety of settings - Bangladesh, India, South Africa and the United States of America. The selected programmes train CHWs to provide a range of hearing-care services, from childhood hearing screening to management of age-related hearing loss. We discuss lessons learnt from these examples to inform best practices for task shifting within community-delivered hearing care. Preliminary evidence supports the feasibility, acceptability and effectiveness of hearing care delivered by CHWs in these varied settings. To make further progress, community-delivered hearing care must build on established models of CHWs and ensure adequate training and supervision, delineation of the scope of practice, supportive local and national legislation, incorporation of appropriate technology and analysis of programme costs and cost-effectiveness. In view of the growing evidence, community-delivered hearing care may now be a way forward to improve hearing health equity.
  • Search for a right-handed gauge boson decaying into a high-momentum heavy neutrino and a charged lepton in pp collisions with the ATLAS detector at √s = 13 TeV

    Berlendis, S.; Cheu, E.; Delitzsch, C.M.; Johns, K.A.; Jones, S.; Lampl, W.; LeBlanc, M.; Leone, R.; Loch, P.; Rutherfoord, J.P.; et al. (ELSEVIER, 2019-11-10)
    A search for a right-handed gauge boson W_R, decaying into a boosted right-handed heavy neutrino N_R, in the framework of Left-Right Symmetric Models is presented. It is based on data from proton-proton collisions with a centre-of-mass energy of 13 TeV collected by the ATLAS detector at the Large Hadron Collider during the years 2015, 2016 and 2017, corresponding to an integrated luminosity of 80 fb⁻¹. The search is performed separately for electrons and muons in the final state. A distinguishing feature of the search is the use of large-radius jets containing electrons. Selections based on the signal topology result in smaller background compared to the expected signal. No significant deviation from the Standard Model prediction is observed and lower limits are set in the W_R and N_R mass plane. Mass values of the W_R smaller than 3.8-5 TeV are excluded for N_R in the mass range 0.1-1.8 TeV. (C) 2019 The Author. Published by Elsevier B.V.
  • Magnetochronology of the Entire Chinle Formation (Norian Age) in a Scientific Drill Core From Petrified Forest National Park (Arizona, USA) and Implications for Regional and Global Correlations in the Late Triassic

    Kent, Dennis V.; Olsen, Paul E.; Lepre, Christopher; Rasmussen, Cornelia; Mundil, Roland; Gehrels, George E.; Giesler, Dominique; Irmis, Randall B.; Geissman, John W.; Parker, William G.; et al. (AMER GEOPHYSICAL UNION, 2019)
    Building on an earlier study that confirmed the stability of the 405-kyr eccentricity climate cycle and the timing of the Newark-Hartford astrochronostratigraphic polarity time scale back to 215 Ma, we extend the magnetochronology of the Late Triassic Chinle Formation to its basal unconformity in scientific drill core PFNP-1A from Petrified Forest National Park (Arizona, USA). The 335-m-thick Chinle section is imprinted with paleomagnetic polarity zones PF1r to PF10n, which we correlate to chrons E17r to E9n (209 to 224 Ma) of the Newark-Hartford astrochronostratigraphic polarity time scale. A sediment accumulation rate of 34 m/Myr can be extended down to 270 m, close to the base of the Sonsela Member and the base of magnetozone PF5n, which we correlate to chron E14n that onsets at 216.16 Ma. Magnetozones PF5r to PF10n in the underlying 65-m-thick section of the mudstone-dominated Blue Mesa and Mesa Redondo members plausibly correlate to chrons E13r to E9n, indicating a sediment accumulation rate of only 10 m/Myr. Published high-precision U-Pb detrital zircon dates from the lower Chinle tend to be several million years older than the magnetochronological age model. The source of this discrepancy is unclear but may be due to sporadic introduction of juvenile zircons that get recycled. The new magnetochronological constraint on the base of the Sonsela Member brings the apparent timing of the included Adamanian-Revueltian land vertebrate faunal zone boundary and the Zone II to Zone III palynofloral transition closer to the temporal range of the 215 Ma Manicouagan impact structure in Canada.
  • Water splitting promoted by electronically conducting interlayer material in bipolar membranes

    Chen, Yingying; Martínez, Rodrigo J.; Gervasio, Don; Baygents, James C.; Farrell, James; Univ Arizona, Dept Chem & Environm Engn (SPRINGER, 2019-11-06)
    Bipolar membranes are used in a variety of industrial applications to split water into hydronium and hydroxide ions. This research investigated the hypothesis that an electronically conducting material between the anion and cation exchange membranes can increase the rate of water splitting by increasing the electric field intensity in the mobile ion depleted region. Bipolar membranes were constructed with electronically conducting (graphene and carbon nanotubes) and electronically insulating (graphene oxide) interlayer materials of varying thickness. All three interlayer materials decreased the voltage required for water splitting compared to a bipolar membrane with no interlayer material. Quantum chemistry simulations were used to determine the catalytic effect of proton accepting and proton releasing sites on the three interlayer materials. Neither graphene nor carbon nanotubes had catalytic sites for water splitting. Thicker layers of graphene oxide resulted in decreased rates of water splitting at each applied potential. This effect can be attributed to a diminished electric field in the mobile ion depleted region with increasing catalyst layer thickness. In contrast, membrane performance with the electronically conducting graphene and carbon nanotube interlayers was independent of the interlayer thickness. An electrostatic model was used to show that interlayer electronic conductance can increase the electric field intensity in the mobile ion depleted region as compared to an electronically insulating material. Thus, including electronically conducting material in addition to a traditional catalyst may be a viable strategy for improving the performance of bipolar membranes.
  • The impact of upright radiographs of midshaft clavicle fractures on treatment recommendations

    Herman, Amir; Whitesell, Rebecca; Stewart, Rena L; Lowe, Jason A; Univ Arizona, Ctr Orthopaed Res & Educ (ACTA MEDICA BELGICA, 2019-09)
    Treatment recommendations for clavicle fractures are based on displacement. The goal of this paper is to determine whether upright clavicle radiographs at initial presentation change the timing and method of treatment. This was a retrospective study in a level 1 trauma center in which 356 patients with clavicle fractures were reviewed. Patients with only supine radiographs (Group 1, 285 patients) were compared to patients with both supine and upright radiographs (Group 2, 71 patients). A higher proportion of fractures were displaced 100% or more of the clavicle width on upright than on supine radiographs (52.1% vs. 33.5%, p = 0.004). Treatment assignment changed from nonoperative to operative more commonly in Group 2 than in Group 1 (43.7% vs. 21.9%, p = 0.019). The most common reason for surgery in Group 1 was continued pain or failure to develop radiographic evidence of callus on serial radiographs (17, 53.1%), compared with Group 2 (2, 14.2%, p = 0.014). In Group 2, the most common cause for a change in treatment was displacement (12, 85.7%), compared with Group 1 (15, 46.9%, p = 0.014). Patients with upright x-rays were more likely to have a change in treatment because of displacement, while patients with only supine x-rays more often developed delayed union or nonunion.
  • An El Niño Mode in the Glacial Indian Ocean?

    Thirumalai, Kaustubh; DiNezio, Pedro N.; Tierney, Jessica E.; Puy, Martin; Mohtadi, Mahyar; Univ Arizona, Dept Geosci (AMER GEOPHYSICAL UNION, 2019-08)
    Despite minor variations in sea surface temperature (SST) compared to other tropical regions, coupled ocean-atmosphere dynamics in the Indian Ocean cause widespread drought, wildfires, and flooding. It is unclear whether changes in the Indian Ocean mean state can support stronger SST variability and climatic extremes. Here we focus on the Last Glacial Maximum (19,000-21,000 years before present) when background oceanic conditions could have been favorable for stronger variability. Using individual foraminiferal analyses and climate model simulations, we find that seasonal and interannual SST variations in the eastern equatorial Indian Ocean were much larger during this glacial period relative to modern conditions. The increase in year-to-year variance is consistent with the emergence of an equatorial mode of climate variability, which strongly resembles the Pacific El Niño and is currently not active in the Indian Ocean.
