Using Generic Telemetry Prognostic Algorithms for Launch Vehicle and Spacecraft Independent Failure Analysis Service
Keywords: independent failure analysis; identification and recovery
Rights: Copyright © held by the author; distribution rights: International Foundation for Telemetering
Collection Information: Proceedings from the International Telemetering Conference are made available by the International Foundation for Telemetering and the University of Arizona Libraries. Visit http://www.telemetry.org/index.php/contact-us if you have questions about items in this collection.
Abstract: Current equipment and vehicle failure analysis practices rely on diagnostic technology developed over the past 100 years of designing and manufacturing electrical and mechanical equipment; identifying the root cause of an equipment failure with these methods requires expertise with the equipment under analysis. If the failed equipment had embedded telemetry, prognostic algorithms can instead be used to detect deterministic behavior in completely normal-appearing data from fully functional equipment. Such algorithms can identify which equipment will fail within one year of use, and can also identify when the deterministic behavior first appeared for any equipment failure.
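The abstract does not specify the prognostic algorithms themselves. As one hedged illustration of the general idea of finding deterministic behavior hidden in normal-appearing telemetry, the sketch below fits a least-squares trend to a telemetry channel and normalises the accumulated drift by the residual noise level; the function name, the drift model, and the threshold interpretation are illustrative assumptions, not the paper's method.

```python
import random
import statistics

def deterministic_drift_score(samples):
    """Illustrative prognostic indicator (not the paper's algorithm):
    least-squares slope of a telemetry channel, expressed as total
    drift over the window in units of the residual noise level.
    A persistent slope buried in noise is one simple proxy for
    'deterministic behavior' in otherwise normal-looking data."""
    n = len(samples)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = statistics.fmean(samples)
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    slope = sxy / sxx
    residuals = [y - (mean_y + slope * (x - mean_x)) for x, y in zip(xs, samples)]
    noise = statistics.stdev(residuals) or 1e-12
    return abs(slope) * n / noise

random.seed(0)
# Hypothetical telemetry: a healthy channel, and one with a slow drift
# far smaller than the sample-to-sample noise.
healthy = [5.0 + random.gauss(0, 0.05) for _ in range(500)]
degrading = [5.0 + 0.0005 * t + random.gauss(0, 0.05) for t in range(500)]

print(deterministic_drift_score(healthy))    # near zero
print(deterministic_drift_score(degrading))  # much larger
```

Both channels look normal to the eye (the drift per sample is a hundredth of the noise), yet the score separates them cleanly, which is the kind of distinction the abstract attributes to prognostic telemetry analysis.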
Sponsors: International Foundation for Telemetering
Related items (matched by title, author, creator and subject):
Analysis of Failures of Decoders for LDPC Codes
Vasic, Bane; Chilappagari, Shashi Kiran; Marcellin, Michael W.; Ryan, William E.; Lux, Klaus M. (The University of Arizona, 2008)
Ever since the publication of Shannon's seminal work in 1948, the search for capacity-achieving codes has led to many interesting discoveries in channel coding theory. Low-density parity-check (LDPC) codes, originally proposed in 1963, were largely forgotten and only recently rediscovered. The significance of LDPC codes lies in their capacity-approaching performance even when decoded using low-complexity sub-optimal decoding algorithms. Iterative decoders are one such class of decoders; they work on a graphical representation of a code known as the Tanner graph. Their properties are well understood in the asymptotic limit of the code length going to infinity; however, the behavior of various decoders for a given finite-length code remains largely unknown. An understanding of decoder failures is vital for the error floor analysis of a given code. Broadly speaking, the error floor is the abrupt degradation in the frame error rate (FER) performance of a code in the high signal-to-noise-ratio region. Since the error floor phenomenon manifests in regions not reachable by Monte Carlo simulations, analytical methods are necessary for characterizing the decoding failures. In this work, we consider hard-decision decoders for transmission over the binary symmetric channel (BSC). For column-weight-three codes, we provide tight upper and lower bounds on the guaranteed error correction capability of a code under the Gallager A algorithm by studying combinatorial objects known as trapping sets. For higher column-weight codes, we establish bounds on the minimum number of variable nodes that achieve certain expansion as a function of the girth of the underlying Tanner graph, thereby obtaining lower bounds on the guaranteed error correction capability. We explore the relationship between a class of graphs known as cage graphs and trapping sets to establish upper bounds on the error correction capability. We also propose an algorithm to identify the most probable noise configurations, known as instantons, that lead to the error floor for linear programming (LP) decoding over the BSC. With the insight gained from these analysis techniques, we propose novel code construction techniques that result in codes with superior error floor performance.
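The Gallager A algorithm referenced above passes messages on the Tanner graph; as a compact, hedged illustration of hard-decision decoding over the BSC, the sketch below implements a simpler serial bit-flipping decoder instead, using the parity-check matrix of the (7,4) Hamming code as a stand-in for an LDPC code. This is a generic textbook technique chosen for brevity, not the dissertation's construction.

```python
# Parity-check matrix of the (7,4) Hamming code (rows = parity checks).
# Chosen as a small stand-in; real LDPC matrices are large and sparse.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(H, y):
    """One bit per check: 1 means the parity check is unsatisfied."""
    return [sum(h * b for h, b in zip(row, y)) % 2 for row in H]

def bit_flip_decode(H, y, max_iters=20):
    """Serial hard-decision bit flipping: repeatedly flip the bit that
    participates in the largest number of unsatisfied checks, until the
    syndrome clears or the iteration budget runs out."""
    y = list(y)
    for _ in range(max_iters):
        s = syndrome(H, y)
        if not any(s):
            return y  # all parity checks satisfied
        # For each bit, count the unsatisfied checks it touches.
        counts = [sum(row[j] for row, sj in zip(H, s) if sj)
                  for j in range(len(y))]
        y[max(range(len(y)), key=lambda j: counts[j])] ^= 1
    return y  # decoding failure: residual errors remain

codeword = [0, 0, 0, 0, 0, 0, 0]    # the all-zero codeword
received = list(codeword)
received[4] ^= 1                     # the BSC flips one bit
print(bit_flip_decode(H, received))  # prints [0, 0, 0, 0, 0, 0, 0]
```

For this small code the decoder corrects any single bit flip; the trapping-set analysis in the dissertation concerns exactly the low-weight error patterns on which such hard-decision decoders fail to converge.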
Evaluating the Effects of Heart Failure Clinic Enrollment on Hospital Admission and Readmission Rates: A Retrospective Data Analysis
Shea, Kimberly D.; Veleta, Patricia M.; Gephart, Sheila M.; Buchner, Brian R. (The University of Arizona, 2016)
Heart failure (HF) is a clinical syndrome associated with high morbidity and mortality and a large economic burden, and it is the leading cause of hospitalizations among Medicare beneficiaries in the United States. Healthcare reform has focused on strategies to reduce HF readmissions, including outpatient HF clinics. Purpose: The purpose of this DNP project was to answer the following question: in adult patients diagnosed with HF, how does enrollment in an HF clinic, compared to non-enrollment, affect hospital admission and readmission rates? Methods: A retrospective analysis of 767 unique patients and their 1,014 respective admissions and readmissions was conducted. Continuous and categorical data were analyzed and presented as mean (M), standard deviation (SD), absolute number (N), and percentage (%). A Pearson chi-square test was used for categorical variables, and analysis of variance was used for age and ejection fraction (EF). Results: Study sample demographics (N=767) were evaluated: age (M=79.72, SD=7.48), gender (57.6% male), and EF (M=0.43, SD=0.16). The no-HF-clinic (No HFC) and HF-clinic (HFC) enrollment groups (N=573) were compared for age (M=79.49, SD=7.65 vs. M=80.39, SD=6.94), male gender (54.6% vs. 66.5%), and EF (M=0.44, SD=0.17 vs. M=0.42, SD=0.15), respectively. Each sample patient had at least one admission for HF during 2015; 573 (46.2%) of these were in the No HFC group and 194 (8.4%) were in the HFC group (p<0.001). There was no difference in all-cause readmissions between the No HFC group [n=95 (14.5%)] and the HFC group [n=37 (16.2%)] (p=0.534), and no difference in HF-related readmissions between the No HFC group [n=72 (11.0%)] and the HFC group [n=23 (10.0%)] (p=0.700). Conclusions: This DNP project demonstrated a significant difference in HF admission rates in favor of the HFC group. While no differences were found in all-cause or HF-related readmission rates between the No HFC and HFC groups, both rates are below the national average. An unintended finding was that datasets can be very poorly constructed and populated, resulting in large amounts of unusable data. The recommendation is for more rigor in the organization of datasets to assure accurate comparisons of admission and readmission rates based on enrollment in HF clinics.
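As a hedged illustration of the Pearson chi-square computation this kind of categorical comparison relies on, the sketch below evaluates the statistic for a 2x2 readmitted / not-readmitted contingency table. The counts are illustrative stand-ins loosely patterned on the group sizes above, not the study's dataset, and no p-value lookup is performed.

```python
def pearson_chi_square(table):
    """Pearson chi-square statistic for an R x C contingency table:
    sum over cells of (observed - expected)^2 / expected, where
    expected = row_total * col_total / grand_total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Illustrative counts only (readmitted, not readmitted) for two groups.
table = [
    [95, 478],   # group A
    [37, 157],   # group B
]
print(round(pearson_chi_square(table), 3))
```

With 1 degree of freedom, a statistic below the 3.84 critical value corresponds to no significant difference at the 0.05 level, which is the shape of the readmission comparison reported above.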