• Analysis of Field Delivered Therapy for Chlamydia and Gonorrhea in Maricopa County

      Ebbing, Brittany; The University of Arizona College of Medicine - Phoenix; Taylor, Melanie (The University of Arizona, 2017-05-08)
      Chlamydia and gonorrhea are among the most frequently reported infectious diseases in the United States. Both are easily treated with antibiotics; however, challenges exist in providing treatment to cases and their sexual partners. Maricopa County implemented a Field Delivered Therapy (FDT) protocol to treat chlamydia and gonorrhea cases and contacts in 2009. Ultimately, this project sought to inform other public health departments across the United States about the benefits of an FDT program for treating gonorrhea and chlamydia and to provide better insight into how to treat the two most commonly reported infectious diseases. Existing data were analyzed from April 1, 2011 to October 31, 2014 (42 months) for all patients who received FDT in Maricopa County, using pharmacy records and electronic health records (PRISM and eClinicalWorks). The following pieces of information were collected from these data sources: gender, age, race/ethnicity, diagnosis, number of partners, and time to treatment. The data were then divided into four FDT groups (FDT, expedited partner therapy via FDT, FDT attempted, and FDT planned). There were 172 patients in this analysis; 140 were diagnosed with or in contact with chlamydia and 16 were diagnosed with or in contact with gonorrhea. There were 79 patients (45.9%) in the FDT group, 28 (16.3%) in the FDT EPT group, 28 (16.3%) in the FDT attempted group, and 37 (21.5%) in the FDT planned group. The median age of these patients was 23.8 years (range 16.6-31); 111 (64.5%) were female. The median time to treatment was 24.6 days (range 0-64.5 days). Most patients (79.6%) lived outside of central Phoenix. The median number of sexual partners reported by these patients was 6.6 (range 1-19.7 partners). A majority of the patients were <25 years old, except in the FDT EPT group, where 100% of patients were >25 years old. The group with the largest proportion of patients <19 years old (32%) was the FDT group.
All the groups had a female majority, except the FDT EPT group, where 75% of the patients were male. Most patients in the FDT only group received testing at an outside hospital or outpatient clinic, while patients in the FDT attempted and FDT planned groups were more often tested at the STD clinic. Future Direction/Conclusion: Many of the patients who received FDT were young women, some pregnant, who lived outside of central Phoenix. However, a majority of the overall clients who received expedited partner therapy via FDT were male, a typically hard-to-reach population for treatment of potentially asymptomatic infections. This study demonstrates an effective method of delivering partner treatment to men. This study can be used to inform other public health departments about this novel practice and to help Maricopa County grow its FDT program to reach even more untreated patients.
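The group breakdown above is simple proportion arithmetic. A minimal sketch, using the group counts reported in the abstract, reproduces the quoted percentages:

```python
# Patients per FDT group, as reported in the abstract.
groups = {
    "FDT": 79,
    "FDT EPT": 28,
    "FDT attempted": 28,
    "FDT planned": 37,
}
total = sum(groups.values())  # 172 patients overall

for name, n in groups.items():
    # One decimal place, matching the abstract's 45.9% / 16.3% / 21.5%.
    print(f"{name}: {n} ({100 * n / total:.1f}%)")
```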
    • Assessment of Scholarly Project Requirements at U.S. Allopathic Medical Schools

      Wypiszynski, Sarah; The University of Arizona College of Medicine - Phoenix; McEchron, Matthew (The University of Arizona., 2017-05-25)
      Over 100 years after the Flexner Report first revolutionized medical education, medical schools across the United States are rethinking the role of scholarly research in their curricula. Scholarly research helps fulfill a number of essential elements of the medical school curriculum. The Scholarly Project (SP) engenders self‐directed independent learning, critical thinking skills, writing skills, life‐long learning, and many other objectives. The SP also allows students to assess evidence and the credibility of sources. According to a 2010 study, the Association of American Medical Colleges (AAMC) Curriculum Directory listed 84 medical schools with required research and 9 schools with a required thesis. This research requirement can take on many forms, some of which have been outlined for specific medical schools. Since then, more schools have embraced SP’s in their curricula, and the SP requirements and objectives have evolved dramatically at many U.S. medical schools. This project aims to (1) identify which U.S. allopathic medical schools have required and elective SP’s, (2) determine the components of these SP’s with respect to the duration and placement within the four‐year curriculum, the types of projects that qualify as SPs, the capstone requirement for the finished SP product, the curricular elements, and the objectives of the SP, and (3) determine how many schools have a required, four‐year longitudinal, hypothesis‐driven SP that culminate in a manuscript or thesis. The 136 allopathic medical schools on the AAMC Application Service website as of September 4, 2014 were included in this research. The individual website of each school was queried to attempt to determine the presence and characteristics of a formal SP within the curriculum. Each school was then contacted with the information that was found from the initial query in order to verify and/or elaborate on the preliminary results. 
Each SP was analyzed to determine (1) whether it was required or optional, (2) its duration and placement within the 4-year curriculum, (3) the capstone requirement, (4) whether the research was required to be hypothesis-driven, (5) the topic areas available for students, (6) whether there was a formal curriculum in scholarly pursuit within the general medical curriculum, and (7) what the program objectives were. A total of 136 medical schools were surveyed in this study. Our analysis revealed that 78 of these schools include some structured SP in their curricula. Of these, 48 SPs are required, and 30 are optional. The majority of these SPs (36) require less than 1 year for completion. A total of 48 of the 78 medical schools had a manuscript or thesis requirement for the final capstone. Of the 48 schools with a required SP, 25 required the research to be hypothesis-driven. A total of 43 of the 78 schools included required scholarship/research curricula as part of the overall medical education curriculum. The objectives of the programs are described in detail in this study. This study identified four medical schools with a required, 4-year longitudinal, hypothesis-driven SP that culminates with production of a manuscript or thesis. Those four allopathic medical schools are as follows: the Albert Einstein College of Medicine of Yeshiva University, the University of Arizona College of Medicine-Phoenix, the Virginia Tech Carilion School of Medicine and Research Institute, and Yale University. The details of each program are explored in the text.
    • Assessment of the Analgesic Efficacy of Intravenous Ibuprofen in Biliary Colic

      Zurcher, Kenneth; The University of Arizona College of Medicine - Phoenix; Quan, Dan (The University of Arizona., 2017-05-22)
      It is estimated that over 20 million people aged 20-74 have gallbladder disease, with biliary colic being a common and painful symptom in these patients. Likely due to the relatively recent approval of intravenous ibuprofen for fever and pain in adults, no assessment of its analgesic efficacy for biliary colic currently exists in the literature. In this double-blind, randomized, controlled trial we aimed to assess the analgesic efficacy of intravenous (IV) ibuprofen given in the emergency department (ED) for the treatment of biliary colic. Analgesic efficacy was evaluated using a visual analog scale (VAS) to assess for a decrease in pain scores. A VAS score decrease of 33% relative to the VAS taken at the time of study drug administration was considered a minimum clinically important difference (MCID) in patient-perceived pain. A VAS was administered in triage upon enrollment, at the time of therapy administration, at 15-minute intervals during the first hour post-administration, and at 30-minute intervals during the second hour. As the standard of care for suspected biliary colic at the study institution is administration of a one-time dose of IV morphine, patients were not denied initial morphine analgesia and were permitted to receive “rescue” morphine analgesia at any point during their ED course. A total of 22 patients completed the study: 9 were randomized to the IV ibuprofen arm, 9 to placebo, and 4 were excluded for a diagnosis other than biliary colic. Mean VAS values from time 0 to time 120 decreased from 5.78 to 2.31 in the ibuprofen group, and from 5.89 to 2.67 in the control group. There was no statistically significant difference between ibuprofen and placebo (p = 0.93), though there was a significant decrease in the measured VAS scores over time (0 minutes to 120 minutes, p = 0.031) in both the ibuprofen and placebo groups.
A statistically significant and clinically important decrease in average VAS scores was seen in both the placebo and ibuprofen groups (55% and 60%, respectively). There was no difference between groups in the time needed to achieve a clinically significant reduction in pain. The sample size of this study may be inadequate to fully assess the analgesic efficacy of IV ibuprofen for biliary colic. In the analysis group (n=18), no significant difference between ibuprofen and placebo was seen; however, there was a statistically and clinically significant decrease in pain in both groups. Two potential confounding factors may have affected the trial's results: administration of standard-of-care IV morphine following initial triage assessment, and the inherently episodic and self-limited nature of biliary colic.
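The 33% MCID rule used in this trial is a relative drop from the VAS score at drug administration. A minimal sketch of that check (the helper name is hypothetical; the group means from the abstract are used as illustrative inputs):

```python
def mcid_reached(baseline_vas: float, current_vas: float, threshold: float = 0.33) -> bool:
    """Return True if pain has dropped by at least `threshold` (33% by
    default) relative to the VAS score at drug administration."""
    if baseline_vas <= 0:
        return False  # no measurable pain at baseline
    return (baseline_vas - current_vas) / baseline_vas >= threshold

# The ibuprofen group's mean fell from 5.78 to 2.31 over 120 minutes,
# a ~60% decrease, well past the 33% MCID threshold.
print(mcid_reached(5.78, 2.31))  # True
```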
    • Celiac Disease in the Hispanic Population at Maricopa Integrated Health System

      Massimo, Lauren; The University of Arizona College of Medicine - Phoenix; Chuang, Keng‐Yu (The University of Arizona., 2017-05-23)
      Celiac disease (CD) is an autoimmune gastrointestinal disorder that has been well studied among non-Hispanic white populations. Data specifically describing the disease in the U.S. Hispanic population are limited, and the available studies that do report prevalence and incidence within this population reveal discrepancies. The aim of this study is to estimate the incidence of CD and to define common presenting symptoms in Hispanics in Phoenix, AZ. Data were collected via a retrospective chart review from Maricopa Integrated Health System (MIHS), an organization caring for a patient population that is >50% Hispanic, between 2004-2013. The study population comprises adult and pediatric patients who had received the ICD-9 code 579.0. The total number of non-repeat patients seen at MIHS each year between 2004-2013 was also determined and broken down by race for incidence calculations. During this 10-year period, 29 total patients were diagnosed with CD at MIHS. The overall yearly incidence increased from 1 in 44,011 patients in 2004 to 1 in 27,948 in 2013. Of the 29 diagnosed, 52% were Caucasian, 34% Hispanic, 7% Asian, and 7% African American. The yearly incidence in Hispanic patients also increased, from 0 in 2004 to 1 in 58,302 in 2007 to 1 in 25,826 in 2013. Although diagnosis was more frequent in females of both races, Hispanic patients were diagnosed at a younger age than Caucasians (22 vs. 31 y/o, respectively). The most common diagnostic approach was serological testing combined with duodenal biopsy. The 3 most common gastrointestinal presenting symptoms in Caucasians were diarrhea, abdominal pain, and nausea/vomiting, while those in Hispanics were constipation, bloating/abdominal distention, and diarrhea. At the time of diagnosis, at least one-third of both Caucasian and Hispanic patients had presented with another autoimmune disorder. Other associated conditions were neurological symptoms and iron-deficiency anemia.
Data from this study suggest that CD in the Hispanic population may be more common in Phoenix than in the overall U.S. population as described in the literature. The data also suggest that Hispanic patients may have different presenting symptoms than do Caucasians. The reason behind the increase in CD incidence in Hispanics is unclear, although increased physician awareness and diagnosis may play a role. Further research and awareness of CD in the Hispanic population may be necessary to optimize diagnosis and treatment of the condition.
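The "1 in N" incidence figures above are yearly case counts divided into the number of unique patients seen that year. A minimal sketch of the arithmetic (the helper is hypothetical, not from the study):

```python
def incidence_one_in(cases: int, unique_patients: int) -> str:
    """Express yearly incidence as '1 in N', where N is the number of
    unique patients seen per diagnosed case that year."""
    if cases == 0:
        return "0 cases"
    return f"1 in {round(unique_patients / cases):,}"

# One CD diagnosis among 27,948 unique patients (the 2013 figure above):
print(incidence_one_in(1, 27948))  # 1 in 27,948
```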
    • Clinical Correlates of the Alzheimer's Questionnaire

      Budolfson, Katie; The University of Arizona College of Medicine - Phoenix; Sabbagh, Marwan (The University of Arizona., 2017-04-24)
      Informant-based assessments of cognition and function are commonly used to differentiate individuals with amnestic mild cognitive impairment (aMCI) and Alzheimer's disease (AD) from those who are cognitively normal (CN). However, determining the extent to which informant-based measures correlate with objective neuropsychological tests is important given the widespread use of neuropsychological tests in making clinical diagnoses of aMCI and AD. The aim of the current study is to determine how well the Alzheimer's Questionnaire (AQ) correlates with objective neuropsychological tests. Results showed that the AQ correlated strongly with the Mini Mental State Exam (MMSE, r = -0.71) and the Mattis Dementia Rating Scale-2 (DRS-2, r = -0.72), and moderate correlations were noted for the AQ with memory function (Rey Auditory Verbal Learning Test Delayed Recall, r = -0.61) and executive function (Trails B, r = 0.53). The AQ also correlated moderately with language function (Boston Naming Test 30-Item, r = -0.44), but showed a weak correlation with visuospatial function (Judgment of Line Orientation, r = -0.28). The AQ thus correlates particularly well with cognitive screens, its strongest correlations being with the MMSE and the DRS-2. The findings of this study suggest that the AQ correlates well with several neuropsychological tests, particularly those that assess the domains of memory and executive function. These results lend further support to the validity of the AQ as a screening instrument for cognitive impairment, as it correlates well with neuropsychological measures used to make clinical diagnoses of aMCI and AD.
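The r values above are Pearson correlation coefficients between AQ scores and each neuropsychological measure. A stdlib-only sketch on invented data (not the study's) shows the computation, and why higher AQ scores paired with lower MMSE scores produce a negative r:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient, the statistic used to relate AQ
    scores to each neuropsychological test score."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented pairs: higher AQ (more informant-reported impairment) tends to
# go with lower MMSE, so r comes out strongly negative, as in the study.
aq   = [2, 5, 9, 14, 20, 24]
mmse = [29, 26, 27, 22, 19, 17]
print(round(pearson_r(aq, mmse), 2))
```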
    • Clinical Indicators that Predict Readmission Risk in Patients with Acute Myocardial Infarction, Heart Failure, and Pneumonia

      Chen, Weihua; The University of Arizona College of Medicine - Phoenix; Antonescu, Corneliu; Holland, William (The University of Arizona., 2017-04-28)
      BACKGROUND: In order to improve the quality and efficacy of healthcare while reducing the overall cost to deliver that healthcare, it has become increasingly important to manage utilization of services for populations of patients. Healthcare systems are aggressively working to identify patients at risk for hospital readmission. Although readmission rates have been studied before, the parameters that identify patients at risk for readmission appear to vary depending on the patient population. We examined existing Electronic Health Record (EHR) data at Banner Health to establish which parameters are clinical indicators of readmission risk. Three conditions have been identified by the CMS as having high and costly readmission rates: heart failure (HF), acute myocardial infarction (AMI), and pneumonia. This study focuses on determining the primary predictive variables for these three conditions in order to have maximum impact on cost savings. METHODS: A literature review was done and 68 possible risk variables were identified. Of these, 30 were identifiable within the EHR system. Inclusion criteria for individual patient records were an index admission secondary to AMI, heart failure, or pneumonia and a subsequent readmission within 30 days of the index admission. Pediatric populations were not studied, since they have unique factors for readmission that are not generalizable. Logistic regression was applied to all data, including rows with missing values, so that all coefficients could be interpreted for significance. This model was termed the full model. Variables that were determined to be insignificant were subsequently removed to create a new reduced model. Chi-square testing was then done to compare the reduced model to the full model to determine whether any significant differences existed between the two. RESULTS: Several variables were determined to be significant predictors of readmission.
The final reduced model had 19 predictors. When analyzed using ROC analysis, the area under the curve (AUC) was 0.64. CONCLUSION: Several variables were identified that could be significant contributors to readmission risk. The final model had an AUC of 0.64 on its ROC curve, suggesting that it would have only poor to moderate clinical value for predicting readmission.
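The reported AUC of 0.64 summarizes how well the reduced model's risk scores rank readmitted patients above non-readmitted ones. One standard way to compute AUC is the rank (Mann-Whitney) formulation sketched below; the labels and scores are invented for illustration, not taken from the study:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a randomly chosen positive case (readmitted) is
    scored higher than a randomly chosen negative one (ties count 0.5)."""
    pos = [s for label, s in zip(labels, scores) if label == 1]
    neg = [s for label, s in zip(labels, scores) if label == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented risk scores from a hypothetical readmission model:
labels = [1, 0, 1, 0, 0, 1, 0, 0]
scores = [0.62, 0.48, 0.55, 0.51, 0.30, 0.44, 0.40, 0.58]
print(round(roc_auc(labels, scores), 2))
```

An AUC of 0.5 is chance-level ranking; values in the low 0.6s, as here, are why the authors describe the model's clinical value as only poor to moderate.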
    • Clinical Symptoms and Modified Barium Swallow (MBS) Score in Evaluation of Pediatric Patients with Dysphagia and Aspiration

      Monks, Sarah; The University of Arizona College of Medicine - Phoenix; Williams, Dana (The University of Arizona., 2017-05-12)
      Dysphagia with aspiration (DA) is the most common presenting symptom of patients at Phoenix Children's Hospital's Aerodigestive Clinic (ADC). Dysphagia with aspiration is associated with respiratory and gastrointestinal symptoms, chronic oral thickener use to prevent aspiration, secondary constipation, and, occasionally, enteral tube dependency. MBS is considered the gold standard in instrumental assessment of dysphagia; it is used to evaluate severity and guide thickener treatment of DA patients, to monitor progress with serial studies, and for re-evaluation after intervention when appropriate. A previous evaluation of patients with a deep interarytenoid notch given laryngoplasty injection included patients whose symptoms improved despite worsening post-intervention MBS scores, and vice versa, challenging the use of MBS as a longitudinal tool in the clinical evaluation of patients with dysphagia and aspiration. This study asked: is MBS severity score reflective of clinical symptoms in pediatric patients with dysphagia and aspiration? A clinical questionnaire of DA symptoms was developed with input from the ADC physicians. The questionnaire was administered over 3 months to patients aged 1-3 years who had an MBS evaluation within 6 months of their initial ADC visit, the standard of care for patients with DA. Seventeen symptoms (12 GI and 5 pulmonary) were each given a numerical score of 0-4 based on parent recall of frequency. MBS was scored 1-10 based on the thickness of liquid recommended for aspiration prevention. Individual symptoms and symptom sets (total questionnaire score, GI score, pulmonary score) were compared to MBS score using a linear regression model. Thirty patients were surveyed, with a median MBS score of 6 and a range from 0 to 8. Eighteen patients had an MBS score above 6. The median questionnaire score was 18, with a range from 4 to 53. All analyses showed no significant correlation between individual symptoms or symptom sets and MBS score; the highest R² value for any individual symptom was 0.05.
Among ADC patients with DA, MBS severity score did not correlate with the severity or specificity of symptoms, calling into question the use of MBS as a tool for grading the severity of persistent DA or as a repeated tool for assessing response to laryngeal cleft surgical interventions and thickener-wean therapy. These findings challenge the use of repeated MBS in the ADC patient population. Our ultimate goal is to develop a combined clinical and radiologic tool that would minimize radiation exposure and unnecessary thickener treatment while promoting the best clinical outcomes.
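The symptom-vs-MBS comparisons above used simple linear regression, with R² as the measure of fit. A stdlib sketch on invented scores shows how an R² near zero, like the 0.05 ceiling reported in this study, falls out of the computation:

```python
def r_squared(xs, ys):
    """Coefficient of determination for a simple least-squares line;
    equals the square of the Pearson correlation between xs and ys."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Invented questionnaire scores vs. MBS scores with essentially no linear
# relationship, mimicking the weak fits reported above:
symptom = [18, 4, 30, 12, 25, 9, 40, 15]
mbs     = [6, 7, 5, 8, 6, 2, 6, 7]
print(round(r_squared(symptom, mbs), 2))
```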
    • Comparing Different Forms of Childhood Maltreatment as Risk Factors for Adult Cardiovascular Disease and Depression

      Panchanathan, Amritha; The University of Arizona College of Medicine - Phoenix; Caldwell, Jon G. (The University of Arizona., 2017-05-23)
      Research has shown an association between childhood maltreatment and risk factors for cardiovascular disease and depression. The purpose of this study is to examine the total and unique effects of various forms of childhood maltreatment on the development of risk factors for cardiovascular disease and depression in both women and men. Data for this study were obtained from retrospective chart review and from an already established research database at a private healthcare facility specializing in the treatment of trauma and addiction. All information pertained to participants' admission to the healthcare facility and included self-report data on childhood maltreatment and symptoms of depression, as well as retrospective chart review data on physiological metrics of cardiovascular disease risk (blood pressure, cholesterol, diabetes). Results from 290 patients indicated that emotional abuse and emotional neglect were the leading predictors of negative outcomes, with emotional neglect being a significant predictor of adult depression even after controlling for age, gender, and marital status. Younger participants and women reported higher levels of depression. The gender-specific regressions showed that younger age and emotional neglect remained significant predictors of depression, with the percent variance explained by the model being greater among men than among women. This greater effect size among men was driven by a stronger association between younger age and depression in men than in women. Childhood emotional abuse was associated with greater risk for coronary heart disease, even after controlling for gender and marital status. Gender-specific analyses showed that, for men, childhood physical neglect emerged as a significant predictor of coronary heart disease risk after controlling for marital status.
Contrary to predictions, among women, none of the five types of childhood maltreatment emerged as a significant predictor of coronary heart disease risk. Moreover, depression was inversely associated with risk for coronary heart disease: higher levels of depression were consistently associated with lower levels of coronary heart disease risk. This was attributed to the fact that younger people reported higher levels of depression, while younger age was also associated with lower coronary heart disease risk. The results of this study can be used to develop screening tools for depression and cardiovascular disease based on childhood maltreatment severity and type. Research question: To what degree are specific types of childhood abuse and neglect (i.e., emotional, physical, or sexual) risk factors for depression and cardiovascular disease, and how are these risks moderated by gender? Hypotheses: 1) Higher levels of childhood neglect and abuse (all forms taken together) will be related to higher levels of depressive symptoms and greater risk for cardiovascular disease. 2) Comparing the five basic forms of neglect and abuse, emotional abuse will have the strongest association with elevations in depression and cardiovascular risk. 3) The relation between childhood maltreatment and cardiovascular risk will be stronger in women than in men.
    • Comparing Transcutaneous to Serum Bilirubin after Phototherapy in the Outpatient Setting

      Makarova, Natasha; The University of Arizona College of Medicine - Phoenix; McMahon, Shawn (The University of Arizona., 2017-05-10)
      Currently, few studies have investigated the accuracy of transcutaneous bilirubinometry after phototherapy, especially in the outpatient setting. The purpose of this study was to evaluate the accuracy of transcutaneous bilirubin (TcB) measurements after phototherapy in neonates with jaundice. At the Maricopa Integrated Health System, neonates who undergo phototherapy for hyperbilirubinemia come in for outpatient follow-up at the Comprehensive Health Center following their discharge. For those neonates, current protocol calls for total serum bilirubin (TSB) to be measured to properly monitor bilirubin levels; however, transcutaneous measurements were made and recorded as well. In this study, we compared the values of total serum bilirubin and transcutaneous bilirubin in jaundiced neonates who underwent phototherapy. From October 2013 to April 2015, a total of 67 healthy infants who had received phototherapy in our hospital were seen in the Pediatric Clinic; only 36 (54%) of those met the minimum data criteria to be included in the study. The absolute difference between mean serum bilirubin and mean transcutaneous bilirubin in healthy outpatient newborns who received inpatient phototherapy was 0.4, which is clinically insignificant. The average time from hospital discharge to return to clinic was 47 hours. We conclude that the outpatient physician can use transcutaneous bilirubinometry following phototherapy, which facilitates faster, more convenient, and painless follow-up visits.
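The 0.4 figure above is the absolute difference between the group mean TSB and the group mean TcB. A minimal sketch of that comparison (the paired readings below are invented, chosen so the means differ by 0.4; they are not study data):

```python
def abs_mean_difference(tcb, tsb):
    """Absolute difference between the mean transcutaneous (TcB) and mean
    serum (TSB) bilirubin values of paired measurements."""
    assert len(tcb) == len(tsb) and tcb, "need equal-length, non-empty lists"
    return abs(sum(tcb) / len(tcb) - sum(tsb) / len(tsb))

# Invented paired readings for four newborns:
tcb = [10.1, 12.4, 9.0, 11.2]
tsb = [10.5, 12.8, 9.4, 11.6]
print(round(abs_mean_difference(tcb, tsb), 2))  # 0.4
```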
    • Computed Tomography Perfusion Imaging In Acute Ischemic Stroke: Do The Benefits Outweigh The Costs?

      Willows, Brooke; The University of Arizona College of Medicine - Phoenix; Karis, John (The University of Arizona., 2017-05-25)
      The current stroke imaging protocol at Barrow Neurological Institute calls for noncontrast computed tomography (NCCT), computed tomography angiography (CTA), and computed tomography perfusion (CTP) at the time of presentation to the emergency department (ED); follow-up imaging includes magnetic resonance diffusion-weighted imaging (MR-DWI). This information is used to determine the appropriateness and safety of tissue plasminogen activator (tPA) administration. Previous studies have shown that the risk of post-tPA hemorrhagic conversion rises significantly as the size of the infarct core increases. Thus, it is of great importance to have an accurate method of measuring core infarct size in patients presenting with acute ischemic stroke. The purpose of our study is to determine whether CTP correctly identifies the infarct core and whether post-tPA hemorrhagic conversion is related to the size of the infarct core and/or the accuracy of CTP in identifying it. The ultimate goal is to improve patient outcomes by decreasing the morbidity and mortality associated with tPA administration. This study is a retrospective chart review of all patients who presented to the ED during a one-year period with signs and symptoms of acute ischemic stroke and subsequently received tPA. Imaging was also reviewed, including the NCCT, CTA, CTP, and MR-DWI for each patient. In this study, MR-DWI is used as the gold standard for determining the presence or absence of an infarct core. CTP and MR-DWI agreed on the presence of an infarct core in 7 patients, or 10 percent of the time, and agreed on the absence of an infarct core in 31 patients, or 44 percent of the time. In the other 32 patients, CTP and MR-DWI were in disagreement. The percent correlation between CTP and MR-DWI was found to be 24 percent, with a p-value < 0.05.
As for post-tPA hemorrhagic conversion, 12 percent of patients had hemorrhagic conversion, and when the hemorrhage rate was compared to the size of the infarct core, the odds of post-tPA hemorrhagic conversion were 56 times higher in patients with infarct cores larger than one-third of a vascular territory than in patients with smaller infarct cores (p < 0.001). Although no significant correlation was found between the accuracy of CTP data and the rate of post-tPA hemorrhagic conversion, patients with concordant CTP and MR-DWI data had a 46% lower likelihood of post-tPA hemorrhagic conversion than patients with contradictory CTP and MR-DWI data. Conclusion: Because patients with infarct cores larger than one-third of a vascular territory are 56 times more likely to hemorrhage than patients with smaller infarct cores, and CTP is less accurate than MR-DWI in identifying the infarct core in patients presenting with acute ischemic stroke, CTP studies should not be part of the acute stroke imaging protocol. Another imaging modality, such as MR-DWI, may be preferable in the setting of acute ischemic stroke for identifying the infarct core.
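The "56 times higher odds" comparison comes from a 2x2 table of hemorrhagic conversion by infarct-core size. The abstract does not report the raw counts, so the numbers below are invented purely to illustrate the computation, chosen to land near an odds ratio of 56:

```python
def odds_ratio(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Odds ratio from a 2x2 table: here, odds of hemorrhagic conversion
    with a large infarct core vs. odds with a small one."""
    a = exposed_events                      # large core, hemorrhage
    b = exposed_total - exposed_events      # large core, no hemorrhage
    c = unexposed_events                    # small core, hemorrhage
    d = unexposed_total - unexposed_events  # small core, no hemorrhage
    return (a * d) / (b * c)

# Invented counts (not the study's data): 6/10 hemorrhages with large
# cores vs. 2/77 with small cores gives an odds ratio of 56.25.
print(odds_ratio(6, 10, 2, 77))
```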
    • CT Findings of Pulmonary Hypertension

      Patel, Akash; The University of Arizona College of Medicine - Phoenix; Connell, Mary (The University of Arizona., 2017-05-25)
      Primary pulmonary hypertension (PPH) has an extremely poor prognosis, with a mean survival time of 2-3 years from the time of diagnosis. Hemodynamically, PPH is defined by a mean pulmonary artery pressure (mPAP) of ≥ 25 mm Hg. Currently, right heart catheterization (RHC) is the gold standard for measuring arterial pressures and diagnosing PPH; however, it is a highly invasive procedure. Our study examines whether CT angiography can be considered a non-invasive alternative for diagnosing PPH. Past studies have shown that CT measurements of the main pulmonary artery diameter (mPAD) and the mPAD/ascending aorta diameter (AAD) ratio correlate strongly with PPH. In addition to those measurements, we want to determine whether other CT parameters also correlate with PPH. These novel measurements include interventricular septal deviation and the Elizabeth Taylor sign. The interventricular septum normally bows to the right in a non-pathological state; if it is straight or bows to the left, this indicates increased right ventricular pressures, which would be suggestive of PPH. A straight septum indicates increased RV pressures, and bowing to the left is considered markedly increased RV pressure. The Elizabeth Taylor sign is the ratio of the diameter of a segmental bronchus to its corresponding artery; we hypothesize that the artery will be much larger than the bronchus in patients with PPH. Other measurements include the left and right pulmonary artery diameters. This study is a retrospective review of subjects who underwent an otherwise unremarkable CT pulmonary artery angiogram; subjects with pulmonary embolism or other acute pulmonary diseases are excluded. For each subject, the following CT findings are obtained: main pulmonary artery diameter (mPAD), ratio of mPAD to ascending aorta, right and left pulmonary artery diameters, ratio of segmental pulmonary artery to corresponding bronchus, and interventricular septal displacement.
Straightening of the interventricular septum qualifies as increased right ventricular pressure, and right-to-left bowing of the septum qualifies as a marked increase. Mean pulmonary artery pressure measured on any prior or subsequent RHC or echocardiogram within 3 months of the CT is recorded, and any past medical history of connective tissue disease is noted. Descriptive data are calculated and correlations are run to assess for the presence and strength of associations among variables. Data from 484 subjects were collected. The incidence rate of pulmonary hypertension is 13% (n=63); 52% (n=33) of the subjects with pulmonary hypertension are female, with an average age of 55 years. mPA diameter (p<0.001), mPA:AA ratio (p<0.001), and right (p<0.001) and left pulmonary artery (p=0.004) diameters are predictors of pulmonary hypertension. The sPA:B ratio (p=0.08) and interventricular septal displacement (p=0.96) are not predictive of pulmonary hypertension. This study supports an association of mPA diameter, mPA:AA ratio, and right and left pulmonary artery diameters with pulmonary hypertension diagnosed by RHC or echocardiogram. Prospective research is warranted to confirm these findings and establish threshold values for each variable. Currently, an invasive RHC remains the most accurate method of diagnosis. Correlating CT findings with pulmonary hypertension would allow clinicians to use CT as a noninvasive screening tool.
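Of the predictors above, the mPA:AA ratio is the simplest to compute from a single axial CT slice. The study deliberately stops short of fixing cutoffs, so the 1.0 threshold in the sketch below is an illustrative convention from the broader literature, not a result of this study, and the measurements are invented:

```python
def mpa_aa_ratio(mpa_diameter_mm: float, aa_diameter_mm: float) -> float:
    """Ratio of main pulmonary artery (mPA) to ascending aorta (AA)
    diameter, both measured on the same axial CT slice."""
    return mpa_diameter_mm / aa_diameter_mm

# Hypothetical measurements: a 33 mm mPA against a 30 mm ascending aorta.
ratio = mpa_aa_ratio(33.0, 30.0)
print(round(ratio, 2))  # 1.1
# A ratio > 1.0 is a commonly cited CT flag for pulmonary hypertension in
# the literature; this study did not establish its own threshold values.
print(ratio > 1.0)  # True
```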
    • Does Adjunctive Pain Control with Dexmedetomidine Improve Outcomes in Patients with Adolescent Idiopathic Scoliosis?

      Spaulding, Kole; The University of Arizona College of Medicine - Phoenix; Shrader, M. Wade (The University of Arizona., 2017-05-19)
      Adolescent Idiopathic Scoliosis (AIS) is typically treated surgically by Posterior Spinal Fusion (PSF) surgery. Intravenous analgesics and oral opioids are commonly used for pain management, and several adjunct therapies are used in addition to these standard treatments. One of these therapies is dexmedetomidine (Dex). Though Dex has been found to be an effective sedative for post‐operative patients, several adverse effects are associated with its use. The purpose of this study was to investigate the effectiveness and overall benefit of using Dex for pain control in patients undergoing PSF for AIS. IRB approval was obtained. A group of 43 patients with AIS undergoing PSF and using Dex for adjunctive pain control were matched with 43 patients who did not use Dex. The groups were matched based on gender, age, height, weight, and level of spinal fusion. During the patients’ post‐operative hospital stay, total opioid use and clinical pain scores were compared between the two groups using t‐tests, with significance set at p<0.05. Total opiate use was 239.6 morphine equivalent doses in the non‐Dex (control) group and 246.2 in the group that received Dex (p=0.72). The average pain score in the control group was 2.3, and in the group that received Dex it was 2.6 (p=0.43). There were no differences in complication rates between the two groups, specifically oversedation rates and pulmonary complications. Lastly, the average length of stay (LOS) for the control group was 4.8 days, compared to 5.0 days in the Dex group (p=0.35). Although adjunctive pain modalities may be very useful in the treatment of postoperative pain after PSF in patients with AIS, the use of Dex in this cohort did not improve pain scores, lower opioid use, or lower the LOS. Based on these results, we do not recommend the routine use of dexmedetomidine as an adjunctive pain control modality. 
Adjunctive modalities are important for pain control in patients with AIS undergoing PSF, but the use of dexmedetomidine was not effective in improving pain control.
    • The Effect of Two Attending Surgeons on Patients with Large Curve Adolescent Idiopathic Scoliosis Undergoing Posterior Spinal Fusion

      Bosch, Liam Christian; The University of Arizona College of Medicine - Phoenix; Shrader, Wade (The University of Arizona., 2017-06-01)
      Surgical correction of Adolescent Idiopathic Scoliosis (AIS) carries a substantial risk of complication. The literature supports improved perioperative outcomes with a two‐surgeon strategy in other complex orthopedic procedures. Does the presence of 2 versus 1 attending surgeons affect the perioperative morbidity of posterior spinal fusion (PSF) in patients with AIS curves greater than 70°? We reviewed the database from a large regional children’s hospital of all patients with AIS curves greater than 70° who underwent PSF from 2009‐2014 and divided the cohort into single‐ versus 2‐surgeon groups (28 vs. 19 cases, respectively). We analyzed cases for length of surgery, estimated blood loss, and length of stay. The groups were comparable in age, gender, spinal levels fused, and average ASA score. However, the average Cobb angle in the single‐surgeon group was significantly less than in the 2‐surgeon group, at 78.4 vs 84.0 degrees, respectively (p=0.049). Mean operative time for single versus 2 surgeons was 238 (SD 48) vs 212 (SD 46) minutes (p=0.078). Mean percent estimated blood loss was 26% (SD 14.1) for a single surgeon vs 31% (SD 14.9) for 2 surgeons (p=0.236), and mean estimated blood loss for a single surgeon vs 2 surgeons was 830 ml (SD 361) vs 1045 ml (SD 346) (p=0.052). Mean length of stay was significantly decreased in the 2‐surgeon group at 5.16 days (SD 1.7) versus 6.82 days (SD 6.82) in the single‐surgeon group (p=0.002). Overall, the use of 2 surgeons in AIS deformity correction at an experienced regional children’s hospital did not improve clinical outcomes: the average length of stay was reduced in the two‐surgeon group, but there was no significant impact on blood loss or operative time. However, this study does not rule out the potential for positive impact with a two‐surgeon strategy, and given previous supportive data in the literature, this approach should be further evaluated to determine its effect on improving perioperative outcomes.
    • The Effectiveness of Military Medicine in Counterinsurgency Campaigns

      Ly, Jane; The University of Arizona College of Medicine - Phoenix; Beyda, David (The University of Arizona., 2017-05-10)
      While medical diplomacy has played a large role in US counterinsurgency (COIN) campaigns, few studies have been done to show its effectiveness. This study is a systematic review of literature published by July 2014 examining military medicine’s role in Operations Enduring Freedom and Iraqi Freedom (OEF/OIF). Both scientific and military databases were searched and yielded an initial 1,204 papers; these were narrowed down to four articles, mostly restricted by the requirement of structured, scientific methods. These four studies were not well‐powered and focused on such different topics that no firm conclusion could be drawn. In the end, the chief value of this study was to show that despite the significant resources poured into these COIN medical operations, very little research has been done to determine whether they have any effect.
    • Effects of High Vs. Reduced‐Dose Melphalan For Autologous Bone Marrow Transplantation in Multiple Myeloma On Pulmonary Function: A Longitudinal Study

      Nikolich‐Zugich, Tijana; The University of Arizona College of Medicine - Phoenix; Knox, Kenneth (The University of Arizona., 2017-05-12)
      Bone marrow transplants (BMT, also hematopoietic stem cell transplants or HSCT/SCT) are one of the greatest medical achievements of the 20th century. They offer a treatment for a host of malignant and nonmalignant hematopoietic disorders, genetic diseases and solid tumors that could otherwise be fatal. Studies have found that 60% of patients undergoing BMT develop pulmonary complications (PC), and 1/3 of those require intensive care after transplantation. Despite the potential pneumotoxicity of induction agents, to date there have been no longitudinal studies following pulmonary function in this high‐risk patient population. This study reviewed patients who underwent autologous bone marrow transplant for multiple myeloma at Banner University Medical Center – Tucson (formerly University of Arizona Health Network) from January 1, 2003 through December 31, 2013. Pretransplant evaluation and pulmonary function testing data were obtained and stratified between high‐dose (standard) Melphalan (200 mg/m2) and reduced‐dose Melphalan (140 mg/m2). Statistically significant differences were present between the two groups at baseline for DLCO but disappeared at 6‐ and 12‐month follow‐up, while a statistically significant difference for the FEV1/FVC ratio was seen at baseline and 6 months but disappeared at 12‐month follow‐up. There were no statistically significant differences seen with FEV1 between the two groups. Given there is no difference in mortality and relapse outcomes between the groups, the standard‐of‐care dosing for Melphalan is not associated with an increase in pulmonary morbidity.
    • The Efficacy of Maternity Waiting Homes in Decreasing Maternal and Perinatal Mortality in Low-Income Countries – A Systematic Review

      Ekunwe, Akua Boatemaa; The University of Arizona College of Medicine - Phoenix; Coonrod, Dean (The University of Arizona., 2017-05-23)
      Maternal and perinatal mortality remains significantly high in low‐income countries, with over 800 deaths per day of women around childbirth; greater than 90% of such deaths occur in low‐income countries. The concept of maternity waiting homes (MWH) was reintroduced to aid in decreasing maternal and perinatal mortality. Since the previous Cochrane Review in 2012 on maternity waiting homes, no randomized controlled studies have been published. Do observational studies on MWHs demonstrate decreased maternal and perinatal mortality in low‐income countries when compared with the standard of care? We searched for primary articles that reported maternal and perinatal deaths as major outcomes in studies that compared MWHs to other methods such as direct hospital admission; we also investigated cesarean delivery rates. Search engines used were Cochrane Review, Medline and CINAHL. Meta‐analyses and forest plots were produced using MedCalc Software. The systematic review was drafted using the MOOSE guidelines for meta‐analyses and systematic reviews of observational studies. Seven articles met criteria for this study. The maternal mortality rate for MWH was 105/100,000 versus 1,066/100,000 for non‐MWH, Relative Risk (RR) 0.145 (95% Confidence Interval (CI) 0.062 to 0.204). The perinatal mortality rate was 60/1,000 in MWH compared to 65/1,000 in non‐MWH, RR 0.782 (CI 0.602 to 1.120). The stillbirth rate was 18/1,000 in MWH and 184/1,000 in non‐MWH, RR 0.204 (CI 63.88 to 94.08). Neonatal mortality rates were 16/1,000 in MWH and 15/1,000 in non‐MWH, RR 0.862 (CI 0.392 to 1.628). The cesarean delivery rate was 24/100 for MWH and 18/100 in non‐MWH, RR 1.229 (CI 1.226 to 1.555). MWHs statistically decreased maternal deaths and stillbirths and increased cesarean delivery rates. Overall, the observational nature of the study designs introduces selection biases that may have altered the results of the studies. No randomized trials have been done to date. 
We suggest cluster‐randomized studies to further evaluate the effect of MWHs.
    • Evaluation of an Opt-Out HIV Screening Program in the Maricopa County Jails

      Nelson, Erin Da‐Hye; The University of Arizona College of Medicine - Phoenix; Taylor, Melanie; Mullany, Charles (The University of Arizona., 2017-05-12)
      Since inmates are a population disproportionately affected by HIV, correctional settings are important sites for delivering HIV services. The Maricopa County (Phoenix area) jail system is the 4th largest in the nation. In 2011, the Maricopa County Correctional Health Service implemented an opt‐out HIV screening program for individuals booked into the Maricopa County Jails (MCJ). The aims of this study were to determine, for the years 2012‐2014, the number of inmates screened for HIV, the HIV positivity rate, the number of newly diagnosed patients, and the clinical characteristics of the newly diagnosed HIV‐positive patients. Five to seven days after booking, inmates are offered HIV screening. These laboratory records were used to determine the number of inmates tested and positivity. History of prior HIV diagnosis was obtained from Maricopa public health records. Retrospective chart review of the MCJ health and case management records, including Ryan White forms, was performed to gather gender, age, race/ethnicity, sexual orientation, drug use, homelessness and co‐morbidities of newly HIV‐diagnosed persons, such as Hepatitis C and prior STDs. Categorical factors were compared between groups with the Chi‐square test. Means were compared using a standard t test. P values ≤0.05 were considered significant. A total of 319,575 persons were booked and 46,346 (14.5%) were screened for HIV during the study period. The majority of booked inmates were male (76.9%) and Caucasian (50.8%). The mean age of inmates was 36 years. There were 70 newly HIV‐diagnosed patients. Chi‐square and t tests comparing newly diagnosed individuals to the general jail population revealed statistical significance for male gender (p=0.02), African American race (p=0.04), and age (p=0.003). Undiagnosed HIV, including AIDS (CD4 counts <200), is an important issue among individuals booked into the MCJ. 
Compared to the general jail population, HIV is more likely to be diagnosed in males rather than females, younger patients, and African‐American patients. Additionally, IV drug use, polysubstance abuse, other STDs (particularly syphilis), high‐risk sexual activity, Hepatitis C and homelessness were common among HIV‐positive patients. Surveillance should be continued and should include more patient education on the importance of screening. Furthermore, targeting high‐risk populations may result in even greater numbers of individuals being diagnosed and treated. Within the next year, all patients at the MCJ will also be offered screening for Hepatitis C, chlamydia, gonorrhea and syphilis. This may result in more patients agreeing to be screened, and subsequently diagnosed with HIV.
    • Evaluation of Educational Intervention on Concussion Knowledge and Behavior in Student Athletes

      Bedard, Julia; The University of Arizona College of Medicine - Phoenix; Wilson, Kristina (The University of Arizona., 2017-04-20)
      Background and Significance: The purpose of this study was to evaluate the effectiveness of the Barrow Brainbook (BBB) concussion education program as a tool to increase concussion knowledge among Arizona high school athletes and to modify attitudes and behaviors regarding concussion. Methods: This was a cross‐sectional study of Arizona high school athletes utilizing a 31‐question, multiple‐choice, de‐identified survey. Attitude, knowledge, and behavior questions, as well as sport and level of participation, were analyzed using the Wilcoxon Rank Sum test. Means between groups were analyzed using a two‐way ANOVA. Linear regression was used to determine if there was a relationship between the number of years since completing BBB and concussion knowledge. Results: Surveys were distributed to 382 student athletes, with 363 completed. 224 students (62%) had participated in BBB. Differences in knowledge and behaviors regarding concussion were not statistically significant when comparing students who had and had not participated in BBB. Those who participated in BBB scored more poorly on questions regarding attitudes about concussion than those who had not (p=0.033). Subsequent two‐way ANOVA testing showed that students who had sustained a concussion scored worse (p<0.01), while completing BBB did not significantly affect attitude (p=0.399) when history of a concussion was brought into the analysis. 90 students (25%) reported sustaining a concussion. Football and varsity‐level participation were associated with a significantly higher mean number of concussions (p<0.05 for each). There was no relationship between time since taking BBB and concussion knowledge (R2 = 0.007). Conclusions: In this study, there was no evidence that participating in the BBB program improved concussion knowledge, attitudes, or behaviors. The number of years since taking BBB was not a good predictor of concussion knowledge. 
Students who played football and those who participated at a varsity level were significantly more likely to sustain a concussion. Sustaining a concussion was associated with a higher attitude risk sum score. This is an evaluation of an educational tool specifically designed for adolescents that demonstrated no statistically significant change in increasing knowledge or modifying attitudes and behaviors in a population of high school athletes in Arizona.
    • Extracellular Matrix Biomarkers are Time Dependent and Regional Specific in Experimental Diffuse Brain Injury

      Jenkins, Taylor; The University of Arizona College of Medicine - Phoenix; Lifshitz, Jonathan (The University of Arizona., 2017-05-09)
      The extracellular matrix (ECM) provides structural support for neuronal, glial and vascular components of the brain, and regulates intercellular signaling required for cellular morphogenesis, differentiation and homeostasis through constant remodeling. We hypothesize that the ECM is susceptible to degradation and accumulation of glycoproteins, which serve as biomarkers specific to diffuse brain injury severity and region. Experimental TBI was induced in male Sprague Dawley rats (325‐375g) by midline fluid percussion injury (FPI) at sham (n=6), mild (1.4 atm, n=16) and moderate (2.0 atm, n=16) severity. Tissue from the cortex, hippocampus and thalamus was collected at 15 minutes, 1, 2, 6 and 18 hours post‐injury, as well as 1, 3, 7 and 14 days post‐injury. All samples were quantified by western blot for the glycoproteins reelin, fibronectin, laminin, and tenascin‐C. Band intensities were normalized to sham and relative to β‐actin. In the cortex, fibronectin decreased significantly at 15 minutes, 1 hour and 2 hours post‐injury, while tenascin‐C decreased significantly at 7 and 14 days post‐injury. In the thalamus, reelin decreased significantly at 2 hours, 3 and 14 days post‐injury. In the hippocampus, tenascin‐C increased significantly at 15 minutes and 7 days post‐injury. Changes in levels of these glycoproteins at acute time points suggest that they may be useful diagnostic biomarkers in an emergency room setting. Further investigation into breakdown products and penetrance into blood is needed. The specificity and sensitivity of these biomarkers remain to be validated as clinically useful tools.
    • Fat Bone Ratio: A New Measurement of Obesity

      Brown, Bryant; The University of Arizona College of Medicine - Phoenix; Roh, Albert; August, David (The University of Arizona., 2017-04-24)
      Importance: This study proposed a new radiographic measure of obesity intended to be a better predictive indicator of obesity‐related risk: the Fat/Bone Ratio. Primary Objective: Does the Fat/Bone Ratio correlate with obesity? Secondary Objective: Does the Fat/Bone Ratio correlate more closely with the comorbidities of obesity than BMI? Design: Retrospective review of 2703 upright posterior‐anterior (PA) and lateral chest radiographs obtained from June 2013 through May 2014. The soft tissue height overlying the acromioclavicular joint was measured and divided by the mid‐clavicle width to determine the Fat/Bone Ratio. Comorbidities of obesity were determined through chart review. Setting: Adult community emergency department. Participants: All adults (age greater than 18). Main Outcomes and Measures: BMI, Fat/Bone Ratio, and comorbidities: hypertension, obstructive sleep apnea, osteoarthritis, hyperlipidemia, atherosclerosis, coronary artery disease, cerebrovascular accident, and myocardial infarction. Results: The Fat/Bone Ratio and BMI were both significantly associated with hypertension, diabetes, hyperlipidemia, obstructive sleep apnea, and osteoarthritis (p < .05). However, only the Fat/Bone Ratio was associated with atherosclerosis (p = 0.02), coronary artery disease (p = 0.001), myocardial infarction (p = 0.002), and peripheral vascular disease (p = 0.01); BMI was not associated with these comorbidities (p = 0.90, 0.42, 0.25, and 0.50, respectively). Conclusions and Relevance: Findings suggest that the Fat/Bone Ratio is an improved measure of obesity as compared to BMI.