• Analysis and Implementation of Resource Efficient Probabilistic Gallager B LDPC Decoder

      Unal, Burak; Ghaffari, Fakhreddine; Akoglu, Ali; Declercq, David; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2017-08)
      Low-Density Parity-Check (LDPC) codes have gained popularity in communication systems and standards due to their capacity-approaching error-correction performance. In this paper, we first expose the tradeoff between decoding performance and hardware performance across three LDPC hard-decision decoding algorithms: Gallager B (GaB), Gradient Descent Bit Flipping (GDBF), and Probabilistic Gradient Descent Bit Flipping (PGDBF). We show that the GaB architecture delivers the best throughput while using the fewest Field Programmable Gate Array (FPGA) resources, but performs the worst in terms of decoding performance. We then modify the GaB architecture, introduce a new probabilistic stimulation function (PGaB), and achieve a dramatic decoding performance improvement over GaB, exceeding the performance of GDBF, without sacrificing its superior maximum operating frequency.
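      The hard-decision decoders compared in this abstract all descend from the same bit-flipping idea. As a rough illustration (a minimal sketch, not the paper's GaB or PGaB FPGA architecture), a Gallager-style bit-flipping decoder flips, at each iteration, the bits that participate in the largest number of unsatisfied parity checks:

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=10):
    # Hard-decision bit-flipping in the Gallager family (illustrative sketch):
    # each iteration flips the bits involved in the most unsatisfied checks.
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2          # which parity checks fail
        if not syndrome.any():
            return x                  # all checks satisfied: valid codeword
        counts = H.T @ syndrome       # unsatisfied-check count per bit
        x[counts == counts.max()] ^= 1
    return x

H = np.array([[1, 0, 1, 0, 1, 0, 1],  # (7,4) Hamming parity-check matrix
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
y = np.zeros(7, dtype=int)
y[6] = 1                              # all-zero codeword with one bit error
print(bit_flip_decode(H, y))          # -> [0 0 0 0 0 0 0]
```

      The probabilistic variants studied in the paper (PGDBF, PGaB) randomize the flipping decision to escape the trapping sets on which a deterministic rule like this one can oscillate.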
    • Analysis of the Electromagnetic Field Generated by Deep Brain Stimulation in Patients with Parkinson's Disease

      Greger, Bradley; Kiraly, Alexis; Guest, Ashley; Graham, Dakota; Muthuswamy, Jitendran; Ponce, Francisco; College of Medicine-Phoenix, University of Arizona (IEEE, 2021-05-04)
      Deep Brain Stimulation (DBS) is a stimulating therapy currently used to treat the motor disabilities that occur as a result of Parkinson's disease (PD). The mechanism of how DBS treats PD is poorly understood. Currently, there is a paucity of data from in-vivo human studies on the electromagnetic field (EMF) generated within neural tissue by DBS. In this study, the EMF generated by DBS was analyzed at different distances from the stimulating electrodes. Our goal was to examine how the EMF strength changed with distance in the human brain. The resulting analysis demonstrated differences of several orders of magnitude across the distances measured. With further study, we aim to connect the EMF effect on neural structures to the efficacy of DBS treatment.
    • Applicability of single- and two-hidden-layer neural networks in decoding linear block codes

      Brkic, Srdan; Ivanis, Predrag; Vasic, Bane; University of Arizona, Department of Electrical and Computer Engineering (IEEE, 2021-11-23)
      In this paper, we analyze the applicability of single- and two-hidden-layer feed-forward artificial neural networks, SLFNs and TLFNs, respectively, in decoding linear block codes. Based on the provable capability of SLFNs and TLFNs to approximate discrete functions, we discuss the network sizes capable of performing maximum likelihood decoding. Furthermore, we propose a decoding scheme which uses artificial neural networks (ANNs) to lower the error floors of low-density parity-check (LDPC) codes. By learning a small number of error patterns uncorrectable with typical LDPC decoders, an ANN can lower the error floor by an order of magnitude, with only a marginal increase in average complexity.
    • Asymptotic Error Probability of the Gallager B Decoder Under Timing Errors

      Dupraz, Elsa; Declercq, David; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2017-01-04)
      In a circuit, timing errors occur when a logic gate output does not switch before the clock rising edge. In this letter, we consider Gallager B decoders under timing errors, following the error model derived by Amaricai et al. from SPICE measurements. For this model, we provide a theoretical analysis of the performance of LDPC decoders. This letter is based on the analysis of the computation trees of the decoder free of logic gate errors and of the decoder with timing errors. As a main result, we show that as the number of iterations goes to infinity, the error probability of the decoder with timing errors converges to the error probability of the logic gate error-free decoder. Monte Carlo simulations confirm this result even for moderate code lengths, which is in accordance with the experimental observations.
    • Asynchronous Execution of Python Code on Task-Based Runtime Systems

      Tohid, R.; Wagle, Bibek; Shirzad, Shahrzad; Diehl, Patrick; Serio, Adrian; Kheirkhahan, Alireza; Amini, Parsa; Williams, Katy; Isaacs, Kate; Huck, Kevin; et al. (IEEE, 2018-11)
      Despite advancements in the areas of parallel and distributed computing, the complexity of programming on High Performance Computing (HPC) resources has deterred many domain experts, especially in the areas of machine learning and artificial intelligence (AI), from utilizing the performance benefits of such systems. Researchers and scientists favor high-productivity languages to avoid the inconvenience of programming in low-level languages and the costs of acquiring the necessary skills required for programming at this level. In recent years, Python, with the support of linear algebra libraries like NumPy, has gained popularity despite facing limitations that prevent such code from running in a distributed setting. Here we present a solution which maintains both high-level programming abstractions and parallel and distributed efficiency. Phylanx is an asynchronous array processing toolkit which transforms Python and NumPy operations into code that can be executed in parallel on HPC resources by mapping Python and NumPy functions and variables into a dependency tree executed by HPX, a general-purpose, parallel, task-based runtime system written in C++. Phylanx additionally provides introspection and visualization capabilities for debugging and performance analysis. We have tested the foundations of our approach by comparing our implementation of widely used machine learning algorithms to accepted NumPy standards.
    • Automated analysis of interactional synchrony using robust facial tracking and expression recognition

      Yu, Xiang; Zhang, Shaoting; Yu, Yang; Dunbar, Norah; Jensen, Matthew; Burgoon, Judee K.; Metaxas, Dimitris N. (IEEE, 2013-04)
      In this paper, we propose an automated, data-driven and unobtrusive framework to analyze interactional synchrony. We use this information to determine whether interpersonal synchrony can be an indicator of deceit. Our framework includes a robust facial tracking module, an effective expression recognition method, synchrony feature extraction and feature selection methods. These synchrony features are used to learn classification models for the deception recognition. To evaluate our proposed framework, we have conducted extensive experiments on a database of 242 video samples. We validate the performance of each technical module in our framework, and also show that these synchrony features are very effective at detecting deception.
    • Automatic Detection of Everyday Social Behaviours and Environments from Verbatim Transcripts of Daily Conversations

      Yordanova, Kristina Y.; Demiray, Burcu; Mehl, Matthias R.; Martin, Mike; Univ Arizona, Dept Psychol (IEEE, 2019-03)
      Coding in social sciences is a process that involves the categorisation of qualitative or quantitative data in order to facilitate further analysis. Coding is usually a manual process that involves a lot of effort and time to produce codes with high validity and interrater reliability. Although automated methods for quantitative data analysis are largely used in social sciences, there are only a few attempts at automatically or semi-automatically coding the data collected in qualitative studies. To address this problem, in this work we propose an approach for automated coding of social behaviours and environments based on verbatim transcriptions of everyday conversations. To evaluate the approach, we analysed the transcripts from three datasets containing recordings of everyday conversations from: (1) young healthy adults (German transcriptions), (2) elderly healthy adults (German transcriptions), and (3) young healthy adults (English transcriptions). The results show that it is possible to automatically code the social behaviours and environments based on verbatim transcripts of the recorded conversations. This could reduce the time and effort researchers need to assign accurate codes to transcribed conversations.
    • Automating Wavefront Parallelization for Sparse Matrix Computations

      Venkat, Anand; Mohammadi, Mahdi Soltan; Park, Jongsoo; Rong, Hongbo; Barik, Rajkishore; Strout, Michelle Mills; Hall, Mary; Univ Arizona, Dept Comp Sci (IEEE, 2016)
      This paper presents a compiler and runtime framework for parallelizing sparse matrix computations that have loop-carried dependences. Our approach automatically generates a runtime inspector to collect data dependence information and achieves wavefront parallelization of the computation, where iterations within a wavefront execute in parallel, and synchronization is required across wavefronts. A key contribution of this paper involves dependence simplification, which reduces the time and space overhead of the inspector. This is implemented within a polyhedral compiler framework, extended for sparse matrix codes. Results demonstrate the feasibility of using automatically-generated inspectors and executors to optimize ILU factorization and symmetric Gauss-Seidel relaxations, which are part of the Preconditioned Conjugate Gradient (PCG) computation. Our implementation achieves a median speedup of 2.97x on 12 cores over the reference sequential PCG implementation, significantly outperforms PCG parallelized using Intel's Math Kernel Library (MKL), and is within 6% of the median performance of manually-parallelized PCG.
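      The inspector/executor idea described here can be illustrated with a toy level-set computation (an assumed simplification, not the paper's polyhedral framework): the inspector walks the iterations in order and assigns each one a wavefront level from the levels of the iterations it depends on; an executor would then run each level in parallel with a barrier between consecutive levels.

```python
def wavefront_levels(n, deps):
    # Runtime "inspector" sketch: deps[i] lists the earlier iterations that
    # iteration i depends on (e.g., the nonzeros in row i of a sparse
    # triangular factor). Iterations sharing a level have no mutual
    # dependences and can execute in parallel.
    level = [0] * n
    for i in range(n):                       # iterations in original order
        for j in deps[i]:
            level[i] = max(level[i], level[j] + 1)
    waves = {}
    for i, l in enumerate(level):
        waves.setdefault(l, []).append(i)
    return [waves[l] for l in sorted(waves)]

# Dependence pattern of a small lower-triangular solve:
deps = [[], [0], [0], [1, 2], [3]]
print(wavefront_levels(5, deps))  # -> [[0], [1, 2], [3], [4]]
```

      The dependence-simplification contribution of the paper targets exactly the cost of this inspection step, which in a naive form grows with the number of nonzeros.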
    • Average Worst-Case Secrecy Rate Maximization via UAV and Base Station Resource Allocation

      Ahmed, Shakil; Bash, Boulat A.; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2019-09)
      In this paper, we consider a wireless network setting where a base station (BS) employs a single unmanned aerial vehicle (UAV) mobile relay to disseminate information to multiple users in the presence of multiple adversaries. The BS, which is on the ground, has no direct link to the users or the adversaries, who are also on the ground. We optimize the joint transmit power of the BS and the UAV, and the UAV trajectory. We introduce the information causality constraint and maximize the average worst-case secrecy rate in the presence of the adversaries. The formulated average worst-case secrecy rate optimization problem is not convex and is solved sub-optimally. First, we optimize the transmit power of the BS and the UAV under a given UAV trajectory. Then, we optimize the UAV trajectory under the sub-optimal UAV and BS transmit power. An efficient algorithm solves the average worst-case secrecy rate maximization problem iteratively until it converges. Finally, simulation results are provided, which demonstrate that the optimal UAV trajectory and transmit power allocation correspond to what the preceding theoretical results suggest.
    • Broad-Time-Horizon Solar Power Prediction and PV Performance Degradation Research at the University of Arizona

      Potter, B.G.; Simmons-Potter, Kelly; Holmgren, William F.; Univ Arizona (IEEE, 2018)
      An overview of University of Arizona cooperative research efforts towards enhanced solar power prediction over PV system operational lifetime is provided. Integration of established research programs in power forecasting (including irradiance and irradiance-to-power modeling) and in device- and system-level performance degradation studies offers new opportunities to address solar power prediction needs over short- and long-term time horizons.
    • Channel Coding for Optical Transmission Systems

      Djordjevic, Ivan B.; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2017)
      In this invited paper, both binary and nonbinary LDPC codes suitable for optical transmission systems are described. The corresponding FPGA implementation is discussed, as is the use of adaptive LDPC coding to deal with time-varying optical channel conditions.
    • Characterization of a Laser-Induced Plasma Using Time-Resolved Dual-Frequency-Comb Spectroscopy

      Zhang, Yu; Lecaplain, Caroline; Weeks, Reagan R. D.; Yeak, Jeremy; Harilal, Sivanandan S.; Phillips, Mark C.; Jones, R. Jason; Univ Arizona, Dept Phys; Univ Arizona, Coll Opt Sci (IEEE, 2019)
      We characterize the dynamics of laser-induced plasmas using time-resolved dual-frequency-comb spectroscopy. The temporal evolution of the plasma's temperature and population number density is estimated for multiple Fe transitions.
    • Characterizing A549 Cell Line as an Epithelial Cell Monolayer Model for Pharmacokinetic Applications

      Frost, Timothy S.; Jiang, Linan; Zohar, Yitshak; Univ Arizona (IEEE, 2018)
      Transport of three different molecules across a porous membrane, with and without a confluent A549 cell monolayer, has been investigated in Transwells and microfluidic devices. The A549 cell line was selected since it has been extensively utilized in toxicology studies due to its potential as a target for drug delivery of macromolecules. The measured molecular transport rate was found to decrease with increasing molecular size due to lower diffusivity. The confluent cell monolayer presents a barrier to molecular transport, significantly reducing the transport rate of larger molecules while having little effect on the paracellular transport of smaller molecules. The results indicate that the microfluidic system is a good model for pharmacokinetic applications.
    • Cleared for launch — Lessons learned from the OSIRIS-REx system requirements verification program

      Stevens, Craig; Williams, Bradley; Adams, Angela; Goodloe, Colby; Univ Arizona (IEEE, 2017)
      Requirements verification of a large flight system is a challenge. This paper describes the approach to verification of the Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) system requirements. It also captures lessons learned along the way from the systems engineers embroiled in this process. This paper begins with an overview of the mission and science objectives as well as the project requirements verification program strategy. A description of the requirements flow down is presented, including an implementation for managing the thousands of program- and element-level requirements and associated verification data. This paper discusses both successes and methods to improve the management of these data across multiple organizational interfaces. The team's risk-based approach to verifying system requirements at multiple levels of assembly is presented using examples from work at instrument, spacecraft, and ground segment levels. A discussion of system end-to-end testing limitations and their impacts on the verification program is included. Finally, this paper describes lessons learned during the execution of the verification program across multiple government and commercial organizations. These lessons and perspectives can be valuable to all space system engineers developing a large NASA space mission.
    • CLEO®/Europe-EQEC 2021 Penrose wave amplification in superfluids of light

      Braidotti, Maria Chiara; Prizia, Radivoje; Wright, Ewan M.; Faccio, Daniele; Wyant College of Optical Sciences, University of Arizona (IEEE, 2021-06-21)
      The amplification of waves in the scattering with rotating black holes is a fundamental process in gravitational physics. It was introduced by Roger Penrose in 1969 as a way to extract energy from these astrophysical objects [1]. Soon after Penrose's proposal, in 1971 Zel'dovich predicted a similar superradiant amplification for electromagnetic waves scattered by a metallic rotating cylinder [2]. In Zel'dovich's proposal, waves are amplified if their angular frequency ω satisfies the condition ω < mΩ, where m is the wave's topological charge and Ω is the cylinder's angular velocity. Although this amplification process is ubiquitous in wave-scattering physics, and is not limited to electromagnetic waves and astrophysics, direct measurements have been limited to an experiment with water waves in a draining-bathtub vortex [3]. The related, but not identical, Zel'dovich amplification has been observed for acoustic waves [4].
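      The amplification condition quoted in this abstract has a standard rotating-frame reading (stated here as background, not as part of the abstract itself): amplification sets in exactly when the wave frequency seen in the frame co-rotating with the cylinder turns negative.

```latex
% Zel'dovich rotational-superradiance condition, as in the abstract:
%   \omega  -- angular frequency of the incident wave
%   m       -- topological charge of the wave
%   \Omega  -- angular velocity of the rotating cylinder
\[
  \omega' \;=\; \omega - m\Omega \;<\; 0
  \quad\Longleftrightarrow\quad
  \omega \;<\; m\,\Omega ,
\]
% in which regime the scattered wave carries more energy than the incident
% one, i.e., the reflection coefficient satisfies |R|^2 > 1.
```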
    • Closed Systems Paradigm for Intelligent Systems

      Shadab, Niloofar; Cody, Tyler; Salado, Alejandro; Beling, Peter; University of Arizona, Department of Systems and Industrial Engineering (IEEE, 2022-04-25)
      Intelligent systems ought to be distinguished as a special type of system. While some adopt this view informally, in practice, systems engineering methods for intelligent systems are still centered around traditional systems engineering notions of engineering by aggregation of components. We posit that this traditional approach follows from holding a notion of open systems as the fundamental precept, and that engineering intelligent systems, in contrast, requires an approach that holds notions of closed systems as fundamental precepts. We take a systems-theoretic approach to defining closed system phenomena and their relation to engineering intelligence. We propose the concept of variety, particularly the law of requisite variety, to enable a closed view in engineering. We discuss how open- and closed-view approaches to engineering intelligent systems address variety differently, as well as the implications of this difference for engineering practice.
    • Cluster States-based Quantum Networks

      Djordjevic, Ivan B.; University of Arizona, Department of Electrical and Computer Eng. (IEEE, 2020-09)
      We propose to implement a multipartite quantum communication network (QCN) by employing the cluster-state-based concept. The proposed QCN can be used to: (i) perform distributed quantum computing, (ii) teleport quantum states between any two nodes in the QCN, and (iii) enable the next generation of cyber security systems.
    • Clustering Regression Wavelet Analysis for Lossless Compression of Hyperspectral Imagery

      Ahanonu, Eze; Marcellin, Michael; Bilgin, Ali; Univ Arizona, Dept Elect & Comp Engn; Univ Arizona, Dept Biomed Engn (IEEE, 2019)
      Recently, Regression Wavelet Analysis (RWA) was proposed as a method for lossless compression of hyperspectral images. In RWA, a linear regression is performed after a spectral wavelet transform to generate predictors which estimate the detail coefficients from approximation coefficients at each scale of the spectral wavelet transform. In this work, we propose Clustering Regression Wavelet Analysis (RWA-C), an extension of the original ‘Restricted’ RWA model which may be used to improve compression performance while maintaining component scalability. We demonstrate that clustering may be used to group pixels with similar spectral profiles; these clusters may then be processed more efficiently to improve RWA prediction performance while requiring only a modest increase in side information.
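      The regression step described in this abstract can be sketched numerically (a toy single-level illustration with assumed Haar filters and synthetic data, not the RWA-C implementation): predict each pixel's spectral detail coefficients from its approximation coefficients, and keep only the regression weights and residuals, from which the details reconstruct losslessly.

```python
import numpy as np

# Synthetic "hyperspectral" data: 100 pixels with 8 spectral bands each.
rng = np.random.default_rng(0)
spectra = rng.integers(0, 256, size=(100, 8)).astype(float)

# One spectral Haar level: approximation (a) and detail (d) per band pair.
a = (spectra[:, 0::2] + spectra[:, 1::2]) / 2
d = spectra[:, 0::2] - spectra[:, 1::2]

# Least-squares predictor of the details from the approximations (affine).
A = np.hstack([a, np.ones((a.shape[0], 1))])
W, *_ = np.linalg.lstsq(A, d, rcond=None)

# The residual is the (typically lower-entropy) signal left to encode;
# d is recovered exactly from A, W, and the residual.
residual = d - A @ W
assert np.allclose(d, A @ W + residual)
```

      On real hyperspectral data the correlation across bands makes these residuals far smaller than the details themselves; the clustering in RWA-C fits separate predictors per group of spectrally similar pixels.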
    • Coding Scheme for the Transmission of Satellite Imagery

      Auli-Llinas, Francesc; Marcellin, Michael W.; Sanchez, Victor; Serra-Sagrista, Joan; Bartrina-Rapesta, Joan; Blanes, Ian; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2016-03)
      The coding and transmission of the massive datasets captured by Earth Observation (EO) satellites is a critical issue in current missions. The conventional approach is to use compression on board the satellite to reduce the size of the captured images. This strategy exploits spatial and/or spectral redundancy to achieve compression. Another type of redundancy found in such data is the temporal redundancy between images of the same area that are captured at different instants of time. This type of redundancy is commonly not exploited because the required data and computing power are not available on board the satellite. This paper introduces a coding scheme for EO satellites able to exploit this redundancy. Contrary to traditional approaches, the proposed scheme employs both the downlink and the uplink of the satellite. Its main insight is to compute and code the temporal redundancy on the ground and transmit it to the satellite via the uplink. The satellite then uses this information to compress the captured image more efficiently. Experimental results for Landsat 8 images indicate that the proposed dual-link image coding scheme can achieve higher coding performance than traditional systems in both lossless and lossy regimes.
    • Combinatorial Constructions of Low-Density Parity-Check Codes for Iterative Decoding

      Vasic, Bane; Milenkovic, O.; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2004-06-01)
      This paper introduces several new combinatorial constructions of low-density parity-check (LDPC) codes, in contrast to the prevalent practice of using long, random-like codes. The proposed codes are well structured, and unlike random codes can lend themselves to a very low-complexity implementation. Constructions of regular Gallager codes based on cyclic difference families, cycle-invariant difference sets, and affine 1-configurations are introduced. Several constructions of difference families used for code design are presented, as well as bounds on the minimal distance of the codes based on the concept of a generalized Pasch configuration.