• Diagnosing Ventilator‐Associated Pneumonia in Burn Patients: Endotracheal Aspirates Versus Bronchoalveolar Lavage

      Lish, James; The University of Arizona College of Medicine - Phoenix; Foster, Kevin N. (The University of Arizona., 2016-03-25)
      Introduction: Ventilator‐associated pneumonia (VAP) is associated with increased mortality, ventilator days, intensive care unit days, and length of stay, especially in the thermal burn patient. In addition to poorer patient outcomes, VAP is estimated to increase the cost of care, making its prevention a high priority within healthcare. While no “gold standard” for VAP diagnosis exists, criteria typically include clinical suspicion, radiography, and microbiological testing. The purpose of this study was to correlate results of endotracheal tube swabs (ETT), endotracheal aspirates (TA), and bronchoalveolar lavage (BAL) in burn patients with suspected VAP, and to determine whether TA sampling is a viable alternative to BAL in the diagnosis of VAP in burn patients. Methods: This was a non‐interventional prospective study of 42 adult burn patients with suspected VAP. Respiratory specimens via ETT, TA, and BAL were collected and cultured. Basic demographics, clinical signs and symptoms, and culture results were collected, and descriptive statistics were performed. Results: Concurrent cultures were performed on the 42 patients with suspected VAP. Correlations were calculated between TA, BAL, and ETT. TA and BAL correlated 87% of the time, while TA and ETT correlated 49% of the time. The correlation between ETT and BAL was 40%. Calculated sensitivities, specificities, positive predictive values (PPV), and negative predictive values (NPV) for TA and BAL were roughly equal, while the values for ETT were much lower. Conclusions: TA is nearly as reliable as BAL in identifying the causative organisms in VAP and should be considered as an economical and easily obtained initial diagnostic test in burn patients suspected to have VAP.
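The TA-versus-BAL comparison above rests on standard 2x2 diagnostic accuracy measures. As a minimal sketch (Python, with illustrative counts only — the abstract does not report the underlying 2x2 table), sensitivity, specificity, PPV, and NPV fall out of the table directly, treating BAL culture as the reference standard:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 table,
    treating the reference standard (here, BAL culture) as truth."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives / all diseased
        "specificity": tn / (tn + fp),   # true negatives / all non-diseased
        "ppv": tp / (tp + fp),           # P(disease | positive test)
        "npv": tn / (tn + fn),           # P(no disease | negative test)
    }

# Illustrative counts only -- not the study's data.
m = diagnostic_metrics(tp=20, fp=2, fn=3, tn=17)
print({k: round(v, 2) for k, v in m.items()})
```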
    • Effectiveness and Student Perception of Simulated Case-Based Learning in Pre-Clinical Medical Education

      Weed, Michael; The University of Arizona College of Medicine - Phoenix; Savi, Christine (The University of Arizona., 2016-03-25)
      Over the past decade, patient simulation has become an important teaching tool in allopathic medical education. Initially, medical simulation was used exclusively in the clinical years of medical training, but implementation into pre‐clinical curriculum is becoming increasingly common. Because simulated teaching experiences are a relatively new practice in pre‐clinical medical education, little is known about their value in this setting. We hypothesize that high‐fidelity patient simulation is an effective method of teaching basic medical sciences during the pre‐clinical years and that it will be viewed favorably by students when compared to other established teaching modalities. The purpose of our study is to: (1) test for an effect of teaching method on test score performance by comparing the results of relevant test items given to two student groups, a simulation group and a traditional case‐based instruction group; and (2) determine student perception of simulation as a learning method for basic medical sciences. Methods: A one‐tier, mixed‐methods design was used for this study. Test item scores were obtained from the classes of 2015 and 2016 at the University of Arizona College of Medicine ‐ Phoenix, and results were analyzed using descriptive statistics to compare means and item difficulty. A Fisher’s exact test was conducted to compare test item performance between students who did and did not use simulation in their case‐based instruction group. Pre‐simulation and post‐simulation surveys were also administered, and thematic extraction was used to triangulate results with the quantitative findings. Results: There was no significant difference between performance of the simulation (n=48) and non‐simulation (n=79) groups on the three test items. Survey results from this study indicate that students do enjoy learning in the simulated case‐based environment and that they find it to be intellectually stimulating.
They also believe simulation will be useful in their careers. They do not, however, believe that it is as effective at teaching basic medical sciences as the traditional lecture hall setting. Students also find simulation learning to be more stressful than small group learning. Conclusion: Our findings suggest that students who learn material through simulated case instruction perform as well as their counterparts who learn the material in traditional, non‐simulated small group settings. However, our survey data suggest that while student perception of simulation is positive overall, there are instances in which simulation is viewed less favorably than both small group and traditional lecture environments. When analyzed together, the test item performance and survey findings show that while simulation can be an effective teaching tool in pre‐clinical medical education, there was not a significant difference when compared to lecture hall and non‐simulated small group learning settings.
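A Fisher's exact test like the one used above to compare test item performance between groups can be computed directly from the hypergeometric distribution. The sketch below (Python; the group-level counts in the example call are hypothetical, not the study's data) sums the probabilities of all 2x2 tables no more likely than the observed one:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
    sums hypergeometric probabilities no larger than the observed table's."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):  # P(first cell == x) under the hypergeometric null
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Hypothetical correct/incorrect counts per group (not the study data):
p = fisher_exact_2x2(40, 8, 65, 14)
print(round(p, 3))
```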
    • Effectiveness of Using Texture Analysis in Evaluating Heterogeneity in Breast Tumor and in Predicting Tumor Aggressiveness in Breast Cancer Patients

      Hopp, Alix; The University of Arizona College of Medicine - Phoenix; Korn, Ronald (The University of Arizona., 2016-03-25)
      Objective and Hypothesis: We hypothesize that tumor heterogeneity or tissue complexity, as measured by quantitative texture analysis (QTA) on mammogram, is a marker of tumor aggressiveness in breast cancer patients. Methods: Tumor heterogeneity was assessed using QTA on digital mammograms of 64 patients with invasive ductal carcinoma (IDC). QTA generates six different values: mean, standard deviation (SD), mean positive pixel value (MPPV), entropy, kurtosis, and skewness. Tumor aggressiveness was assessed using patients’ Oncotype DX® Recurrence Score (RS), a proven genomic assay score that correlates with the rate of remote breast cancer recurrence. RS and hormonal receptor status (estrogen receptor (ER) and progesterone receptor (PR)) were collected from pathology reports. Data were analyzed using statistical tools including Spearman rank correlation, linear regression, and logistic regression. Results: Linear regression analysis showed that the QTA parameter SD was a good predictor of RS (F=6.89, p=0.0108, R2=0.0870) at SSF=0.4. When PR status was included as a predictor, PR status and the QTA parameter Skewness‐Diff yielded a linear model of better fit (F=15.302, p<0.0001, R2=0.2988) at SSF=1. Among PR+ patients, Skewness‐Diff was a good linear predictor of RS (F=9.36, p=0.0034, R2=0.1320) at SSF=0.8. Logistic regression analysis showed that QTA parameters were good predictors of high‐risk RS probability, using cutoffs of 30 and 25 for high‐risk RS; these QTA parameters were Entropy‐Diff for RS>30 (chi2=10.98, p=0.0009, AUC=0.8424, SE=0.0717) and Mean‐Total for RS>25 (chi2=9.98, p=0.0016, AUC=0.7437, SE=0.0612). When PR status was included, logistic models of higher log‐likelihood chi2 were found with SD‐Diff for RS>30 (chi2=18.69, p=0.0001, AUC=0.9409, SE=0.0322) and with Mean‐Total for RS>25 (chi2=25.56, p<0.0001, AUC=0.8443, SE=0.0591). For PR+ patients, good predictors were SD‐Diff for RS>30 (chi2=6.87, p=0.0087, AUC=0.9212, SE=0.0515), and MPP‐Diff and Skewness‐Diff for RS>25 (chi2=16.17, p=0.0003, AUC=0.9103, SE=0.0482). Significance: Quantitative measurement of breast cancer tumor heterogeneity using QTA on digital mammograms may serve as a predictor of RS and could offer a non‐invasive, cost‐effective way to quickly assess the likelihood of RS and high‐risk RS.
    • The Effects of Stigma Toward Mental Illness on Family Physicians

      Sipe, Michelle; The University of Arizona College of Medicine - Phoenix; Goto, Kristine (The University of Arizona., 2016-03-25)
      Many individuals utilize primary care as their main source of mental health care, as in many areas of the US access to specialized psychiatric care does not meet the demand. Prior research has shown that many healthcare practitioners, including those working in generalist fields, carry stigmatized views about individuals with mental illness. Such stigmatized views can result in misattribution of symptoms to mental illness and a decline in proper diagnosis and treatment. Our study aims to examine whether stigmatized views about mental illness relate to family medicine physicians’ comfort levels with treating mental illness, patterns of referral to psychiatrists, or amount of continuing medical education (CME) on psychiatric issues. Our hypothesis is that family medicine physicians who carry less stigmatized views will be more comfortable and up to date with psychiatric care practices and less likely to refer mental health issues to specialized mental health services. Methods: We administered an email survey to family medicine physicians via the Arizona Academy of Family Physicians monthly electronic newsletter. The survey contained demographic questions, a short (5‐question) validated stigma questionnaire (the Attitudes to Mental Illness Questionnaire, or AMIQ), and questions regarding self‐stated comfort level with mental illness, amount of recent mental health CME, and likelihood of referral for various mental illnesses. Results: AMIQ stigma ratings and referral rates for anxiety were significantly related (p=.012), as were AMIQ stigma ratings and amount of mental health CME (p=.001). Other trends were observed but were not significant. Impact: These results further demonstrate the need for increased emphasis on psychosocial and psychiatric issues, particularly stigma reduction, in family medicine residency training and CME.
If family medicine physicians with high levels of stigma are less likely to treat mentally ill patients or seek further education regarding psychiatric issues, it could disrupt their patients’ quality, cost, and continuity of care.
    • Evaluating CNS Lesions in HIV Patients: A Radiologic/Pathologic Review

      Hunter, Camille; The University of Arizona College of Medicine - Phoenix; Gridley, Daniel G.; Van Tassel, Dane; Fairbourn, Phil (The University of Arizona., 2016-03-25)
      Background and Significance. HIV/AIDS is a commonly encountered disease process in many cities and medical centers throughout the world. Approximately 35 million people live with HIV/AIDS worldwide, many of whom develop pathology of the central nervous system (CNS). Many HIV/AIDS patients suffer substantial morbidity and mortality with the development of CNS abnormalities including toxoplasmosis encephalitis (TE), progressive multifocal leukoencephalopathy (PML), primary central nervous system lymphoma (PCNSL), and other opportunistic infections. Especially in these immunocompromised patients, early accurate diagnosis can affect patient management, which is vital to patient survival. Research Question. We hypothesized that fellowship‐trained neuroradiologists are more accurate than general radiologists in the diagnosis of HIV‐related CNS lesions. Methods. Following institutional IRB approval, we retrospectively analyzed patients with known HIV infection who underwent radiologic imaging and subsequent biopsy of an identified neuropathologic lesion(s) at Maricopa Medical Center between January 2007 and January 2015. Diagnostic scan reports were analyzed to determine whether the correct diagnosis was provided in the impression, and rates of correct diagnosis were compared between fellowship‐trained neuroradiologists and general radiologists. Results. Thirty‐three patients received neurologic imaging with MRI for a pathologically proven HIV/AIDS‐related illness, with 78 total lesions identified. The correct diagnosis was mentioned in 79% (15/19) of cases read by a neuroradiologist, but only 43% (6/14) of cases read by a general radiologist. Overall, the correct diagnosis was mentioned in the initial impression in 21 of 33 (64%) cases. Chi‐squared analysis showed a statistically significant difference in the rate of correct diagnoses between neuroradiologists and general radiologists (p=0.033). Conclusions.
Our study suggests that the availability and utilization of fellowship‐trained specialty staff in radiology is an essential part of accurate early diagnosis. Taking an active role in the work‐up and diagnosis of specialized disease processes is essential for successful and comprehensive care, especially in our local community, where HIV/AIDS support and treatment is on the cutting edge.
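The chi-squared comparison above can be reproduced from the abstract's own counts (15/19 correct for neuroradiologists vs. 6/14 for general radiologists). A minimal Pearson chi-square for a 2x2 table, assuming no continuity correction was applied, gives the reported p = 0.033:

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d, yates=False):
    """Pearson chi-square test for a 2x2 table [[a, b], [c, d]];
    p-value for 1 degree of freedom via the normal-tail identity."""
    n = a + b + c + d
    num = abs(a * d - b * c)
    if yates:                       # optional continuity correction
        num = max(num - n / 2, 0)
    stat = n * num ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(stat / 2))        # survival function of chi2 with 1 df
    return stat, p

# Counts from the abstract: 15 correct / 4 incorrect (neuroradiologists)
# vs. 6 correct / 8 incorrect (general radiologists).
stat, p = chi2_2x2(15, 4, 6, 8)
print(round(stat, 2), round(p, 3))   # → 4.54 0.033
```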
    • Evaluation of Skin Cancer Screenings in Tucson, Arizona from 2006‐2013

      Romano, Gianna; The University of Arizona College of Medicine - Phoenix; Harris, Robin (The University of Arizona., 2016-03-25)
      Background: One out of every three cancer diagnoses is a skin cancer, and the incidence of both melanoma and non‐melanoma type skin cancers is increasing. Skin cancers, including melanoma, are typically treatable if detected early. However, there is insufficient evidence to support recommendations to establish population based skin cancer screening programs. The specific aims of this study are 1) to evaluate characteristics of participants who attend a community skin cancer screening event and who are referred for follow up due to suspicious lesions, 2) to determine the proportion of participants with suspicious lesions identified at a community skin cancer screening event who complied with a request to visit a dermatologist or primary care physician, and 3) to evaluate attitudes toward sun protection practices, and perceived risk of developing skin cancer among participants who attend a community skin cancer screening and have a suspicious skin lesion. Methods: The Skin Cancer Institute sponsored a series of community skin cancer screening events in Tucson, Arizona from 2006 to 2013. Participants completed an American Academy of Dermatology screening form prior to a skin examination by a dermatologist. Participants with suspicious lesions identified at the examination who agreed to be contacted again received questionnaires 4 months after the initial screening to assess compliance with follow‐up recommendations, and their sun protection practices and risk perceptions. Results: 1979 community members attended the skin cancer screenings. The majority of the participants were Caucasian, females, had blue eyes and brown hair, were college educated, had no prior personal or family history of skin cancer, had health insurance but did not have a regular dermatologist, reported that they had never been to a skin cancer screening before, and stated that without this screening that they would not have their skin examined. 
In all, 748 (37.8%) of community members were referred and instructed to see a dermatologist for further evaluation of a skin lesion. Of the 441 participants with a suspicious lesion who consented to participate in the follow‐up study, 120 returned a questionnaire; 90 (75%) reported that they followed up with a dermatologist or physician, and 30 (25%) did not. Of the 90 participants who followed up, 53% received a skin biopsy. The self‐reported diagnoses from the biopsies of the suspicious skin lesions were the following: 1% atypical or dysplastic nevus, 21% actinic keratosis, 16% basal cell carcinoma, 8% squamous cell carcinoma, 2% melanoma, and 38% did not have skin cancer. Conclusions/Impact: This study demonstrated that 38% of community skin cancer screening participants were referred for follow‐up due to a suspicious skin lesion identified during a skin cancer screening event. It also appeared that 75% of those who responded to the follow‐up questionnaire complied with the request within four months, although the response rate for the follow‐up questionnaire was low. Therefore, implementing a formal reminder system following the skin cancer screenings may increase the percentage of participants who follow up with a primary care physician or dermatologist after the screening for further evaluation of their suspicious skin lesion.
    • Expedited Partner Therapy, Addressing Increased STD Infection Rates in Arizona

      Wade, Laura; The University of Arizona College of Medicine - Phoenix; Manriquez, Maria (The University of Arizona., 2016-03-25)
      Introduction: Chlamydia and gonorrhea are the two most reported sexually transmitted diseases (STDs) in Maricopa County.1 Effective treatment of the sex partner(s) of patients diagnosed with these STDs is an important step in preventing repeated infections. Expedited partner therapy (EPT) is the practice of prescribing antibiotics to the sex partner(s) of a patient diagnosed with an STD. EPT is recommended by the CDC in cases of uncomplicated chlamydia or gonorrhea infection.2 On September 26, 2008, Arizona statute was revised to allow for the use of EPT.3 Our study seeks to determine whether the use of EPT results in fewer repeat infections of chlamydia or gonorrhea within six months of initial diagnosis. Methods: We performed a retrospective chart review of 200 female patients diagnosed with chlamydia or gonorrhea between 2010 and 2013. We recorded how partner treatment was addressed, whether the patient had a repeat infection within six months, provider specialty, and additional demographic information. Data were analyzed using one‐way ANOVA or Wilcoxon rank‐sum for continuous variables and chi‐squared or Fisher’s exact for categorical variables. Results: The overall documented repeat infection rate was 14.7% (n=20) among the 136 patients with follow‐up testing within 6 months; 32% (n=64) were lost to follow‐up. Repeat infection rates were 0.0% (n=0) for EPT, 16.1% (n=9) for partner referral, 20.9% (n=9) for partner notification, and 16.7% (n=2) where partner treatment was not documented. When comparing the repeat infection rate for EPT (0.0%) to all other treatments combined (14.7%), the difference was statistically significant (p=0.025). Conclusions: The use of EPT results in fewer repeat infections in patients diagnosed with chlamydia. Limitations include loss to follow‐up and incomplete documentation in the electronic health record. Further investigation into the barriers to EPT is warranted to increase utilization of this strategy for partner treatment.
    • Factors Affecting Follow‐Up Care in Hodgkin’s Lymphoma Survivors

      Baker, Devon; The University of Arizona College of Medicine - Phoenix; Flood, Timothy (The University of Arizona., 2016-03-23)
      As research into the treatment of cancers improves patients’ chances for survival, the number of cancer survivors continues to increase. These patients are often treated with chemotherapy and radiation regimens that can increase their risk for cancers and other complications, such as heart disease, later on. Patients with Hodgkin’s lymphoma tend to be younger than patients with other cancers. Current treatment regimens lead to cures in many Hodgkin’s lymphoma patients, with many long‐term survivors. However, these treatments place survivors at risk for numerous complications, most importantly other cancers and heart disease. Organizations such as the American Cancer Society recommend regular screening and surveillance by a patient’s doctor to detect these potential complications. To assess the factors that affect a patient’s follow‐up care, we sent a survey to 365 Hodgkin’s lymphoma survivors in Arizona and asked them about their specific follow‐up care. The survivors were identified using the Arizona Cancer Registry, and 49 (13.4%) responded to our survey. However, of the 365 letter invitations that were sent out, 118 were returned undeliverable, leading to a corrected response rate of 19.8%. Of the respondents, 93% reported they were getting follow‐up care. We also looked at patient satisfaction with their care as a second outcome; 34 (72.3%) of the patients stated that they were strongly satisfied with their follow‐up care. In order to assess physician‐patient communication, we asked patients if they had received a written follow‐up care plan. Of the respondents to this question, 14 (29.7%) noted that they had received a written follow‐up care plan. These two outcomes were stratified by various demographic factors (age, gender, education status, etc.) to determine if any of these caused a statistically significant difference in a patient’s satisfaction or whether or not they had received a written follow‐up plan.
Due to the low number of responders, no statistically significant difference was found. Future studies are needed to further determine whether or not these sorts of demographic factors play a significant role but we believe studies like this are important as cancer survivorship continues to increase.
    • Factors Associated with Allogenic Blood Transfusion After Reconstructive Hip Surgery in Patients with Cerebral Palsy

      Arthur, Jaymeson; The University of Arizona College of Medicine - Phoenix (The University of Arizona., 2016-03-23)
      Background: The hip joint tends to be highly affected in patients with Cerebral Palsy (CP). Subluxation, problems with ambulation, posture, perineal hygiene, and pain can result. Severe cases often require corrective surgery of the affected dysplastic hip(s). This is often accomplished with varus derotational osteotomy (VDRO), femoral osteotomy, pelvic osteotomy, tendon releases/lengthening, or a combination of these procedures. Due to the highly vascularized nature of bone, these reconstructive hip surgeries can result in marked blood loss. This increases the transfusion burden on the patient and increases exposure to blood products and the associated risks therein. By identifying the risk factors that contribute to intraoperative and postoperative blood loss, targeted strategies may be developed to reduce this risk to the patient. Aims: The purpose of this study is to provide a descriptive analysis of the pediatric CP population undergoing corrective hip surgery. We will attempt to identify risk factors that may predispose patients to significant blood loss during reconstructive hip surgery. This study will be the largest to analyze blood management therapy with the VDRO procedure. Methods: This is a retrospective chart review of consecutive CP patients who underwent reconstructive hip surgery at a single institution from 2000 to 2012. Demographic data analyzed include patient age, gender, race/ethnicity, height, weight, BMI, and medical comorbidities. Also recorded were type of procedure performed, bilateral vs. unilateral reconstruction, specific diagnosis, preoperative hemoglobin and hematocrit (H and H), pre‐transfusion H and H, estimated blood loss (EBL), total operative time, cell saver volume, units transfused, complications, quantity of postoperative transfusion, and post‐transfusion H and H.
Data were compared using the chi‐squared method, or a non‐parametric analog, to assess the likelihood of the need for postoperative transfusion as an initial univariate analysis. Results: 87 patients were included in the study. There was no significant relationship between the use of autologous blood and age, gender, weight, height, or BMI. Patients who received autologous blood had a higher EBL (p=0.029) and were more likely to need allogenic transfusion (p=0.023). A concomitant DEGA procedure carried a 2.25 times relative risk of needing blood transfusion (p<0.001, 95% CI 1.402‐3.611). Bilateral VDRO was 1.64 times more likely to need a transfusion, although this was not quite statistically significant (p=0.052, 95% CI 0.972‐2.756). Conclusion: Varus derotational osteotomy for the correction of neuromuscular hip dysplasia can be associated with excessive blood loss, especially in the CP patient population. The use of autologous vs. allogenic blood products carries various risks and benefits. This study identified that the need for a concomitant DEGA osteotomy is correlated with increased blood loss, as is the use of autologous blood product.
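The relative risks quoted above (e.g., 2.25 for a concomitant DEGA procedure) come with 95% confidence intervals, which for a 2x2 table are conventionally built on the log scale. A sketch, with hypothetical counts chosen only to illustrate the calculation (the abstract does not report the underlying table):

```python
from math import exp, log, sqrt

def relative_risk(a, n1, b, n2):
    """Relative risk of an event in group 1 (a/n1) vs. group 2 (b/n2),
    with a 95% CI from the standard log-RR standard error."""
    rr = (a / n1) / (b / n2)
    se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    ci = (exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se))
    return rr, ci

# Hypothetical counts: 27/45 transfused with concomitant DEGA vs. 12/45 without.
rr, ci = relative_risk(27, 45, 12, 45)
print(round(rr, 2))   # → 2.25
```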
    • Genomic Heterogeneity of Glioblastoma: A Comparison of the Enhancing Tumor Core and the Brain Around the Tumor

      Barbee, Bonnie; The University of Arizona College of Medicine - Phoenix; Tran, Nhan (The University of Arizona., 2016-03-23)
    • Healthcare Access among Adults with Frequent Mental Distress

      Khan, Khalid Salim; The University of Arizona College of Medicine - Phoenix; Hussaini, Khaleel; Rahman, Shakaib; Shennib, Hani (The University of Arizona., 2016-05-04)
      Objective: Mental health plays a central role in the well‐being of individuals. Understanding the factors that influence mental wellness is critical in order to develop effective policy that addresses the burden of mental illness in society. The objective of this study is to identify a possible relationship between healthcare access and the presence of mental distress in individuals. Methods: Logistic regression was performed using cross‐sectional data from a CDC‐developed nationwide behavioral health surveillance program (BRFSS, 2013‐2014). Odds ratios were estimated using frequent mental distress as the outcome of interest while adjusting for confounding variables such as smoking, binge drinking, and obesity. Six models were estimated utilizing our hypothesized variables of interest. Results: The calculated adjusted odds ratios (AOR) and confidence intervals (CI) demonstrated a positive correlation between certain variables measuring access to healthcare and the reporting of frequent mental distress, in agreement with the hypothesis. Those variables were financial cost preventing access to medical care (AOR 2.0, CI 1.9‐2.1) and a span of more than 2 years having elapsed since a routine medical checkup by a healthcare provider (AOR 1.1, CI 1.1‐1.2). The opposite effect was demonstrated in individuals who had no insurance coverage (AOR 0.8, CI 0.7‐0.9), which was contrary to the hypothesis. Conclusion: After adjusting for confounding variables, a strong relationship exists between being unable to see a physician due to cost and the presence of frequent mental distress. Frequent mental distress is also increased in individuals who have not had a routine medical checkup with a physician in the last 2 years.
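The AORs above come from multivariable logistic regression; as a simpler illustration of how an odds ratio and its Wald confidence interval are derived, the sketch below computes the unadjusted analogue from a hypothetical 2x2 table (the counts are invented, not BRFSS data):

```python
from math import exp, log, sqrt

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table [[a, b], [c, d]]
    (exposed with/without outcome; unexposed with/without outcome),
    with a 95% Wald confidence interval built on the log scale."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    ci = (exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se))
    return or_, ci

# Invented counts: distress / no distress among those reporting cost barriers
# vs. those not -- chosen so the point estimate echoes the reported AOR of 2.
or_, ci = odds_ratio(400, 600, 2000, 6000)
print(round(or_, 1))   # → 2.0
```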
    • Healthcare Worker Perceptions and Practices Regarding Influenza Vaccination

      Klassen, Aaron; The University of Arizona College of Medicine - Phoenix; Berisha, Vjollca (The University of Arizona., 2016-03-25)
      Background: Rates of influenza vaccination among healthcare workers (HCWs) are low despite the significant morbidity and mortality benefit to the HCWs, their patients, and their families. Objective: To examine whether attitudes, perceptions, and beliefs of HCWs about influenza and influenza vaccination affect their uptake of the seasonal influenza vaccine. Methods: Telephone interviews were conducted with HCWs during March 2011 to assess seasonal influenza vaccine uptake, attitudes regarding influenza vaccination, and perceptions of risk of influenza infection. Results: Telephone surveys were completed by 1,171 HCWs, of whom 903 responded to all questions relevant to this analysis. Logistic regression models of vaccination rates for the current (2010‐2011) and preceding influenza vaccination seasons were performed. Statistically significant (P<0.05) positive odds ratios for vaccination were found among providers, HCWs with more experience, those who favor mandatory workplace vaccination, those believing that the vaccine protects family members, those believing the average person is somewhat or very likely to be infected with influenza in a given year, those not believing that the influenza vaccine will cause illness, and those claiming a higher likelihood of vaccination if the vaccine were less costly or free. Of these, the strongest modifiable predictors of seasonal influenza vaccination uptake were a belief that the vaccine provides protection to the HCWs’ family members and a belief that the average person is somewhat or very likely to be infected with influenza in a given year. Conclusion: Beliefs about influenza vaccination have significant effects on HCW seasonal influenza vaccine uptake. We recommend targeting these beliefs when designing educational programs for HCWs regarding influenza vaccination.
    • Identifying an Oxygenation Index Threshold for Increased Mortality in Acute Respiratory Failure

      Hammond, Brandon; The University of Arizona College of Medicine - Phoenix; Dalton, Heidi; Willis, Brigham (The University of Arizona., 2016-03-25)
      Objectives: To examine current oxygenation index (OI) data and outcomes using electronic medical record (EMR) data to identify specific OI values associated with outcome. Methods: Retrospective review of EMR data for patients age 1 month to 20 years mechanically ventilated for >24 hours in the PICU. Serial, average, and maximum OI values were calculated. Length of mechanical ventilation, hospital stay, and outcome were assessed. Results: OI was calculated on 65 patients from EMR data, of whom 6 died (9.2%). The median maximum OI was 10 for all patients, 17 for non‐survivors (NS), and 8 for survivors (S) (p=0.14 via Wilcoxon rank‐sum test). Odds ratios (OR) indicated 2.1 times increased odds of death (p=.08, 95% confidence interval 0.89‐5.03) for each one‐unit increase in maximum OI. The OR for average OI likewise indicated a 2.1 times increase in odds of death (p=.14, 95% confidence interval 0.77‐5.48). ROC analysis indicated a higher discriminative ability for maximum OI (AUC=0.68) than average OI (AUC=0.58). OI cut points for mortality were established: mortality was essentially unchanged (6‐7%) across the maximum OI range 0‐17, then nearly tripled to 18% for maximum OI >17. Conclusions: Serial assessment of OI values may allow creation of alert values for increased mortality risk and aid in development of clinical decision rules. Consideration of escalation of therapies for respiratory failure, such as high frequency ventilation or ECMO, at lower levels of OI than historically reported may be warranted. This study also helps to validate prior reports that OI is useful as a severity score for clinical research and outcome prediction.
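The oxygenation index itself is conventionally defined as OI = (FiO2 × mean airway pressure × 100) / PaO2. A small helper applying that definition together with the mortality cut point reported above (maximum OI > 17); the example inputs are illustrative:

```python
def oxygenation_index(fio2, mean_airway_pressure, pao2):
    """OI = (FiO2 x mean airway pressure x 100) / PaO2, the standard
    definition: FiO2 as a fraction, MAP in cmH2O, PaO2 in mmHg."""
    return 100 * fio2 * mean_airway_pressure / pao2

def exceeds_cut_point(oi, cutoff=17):
    """Mortality cut point reported in the abstract (maximum OI > 17)."""
    return oi > cutoff

# Illustrative values: FiO2 0.8, MAP 18 cmH2O, PaO2 75 mmHg.
oi = oxygenation_index(fio2=0.8, mean_airway_pressure=18, pao2=75)
print(oi, exceeds_cut_point(oi))   # → 19.2 True
```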
    • Implementation of a Pediatric Stroke Team: Outcomes and Resource Utilization

      Esque, Jacquelin; The University of Arizona College of Medicine - Phoenix; Buttram, Sandra (The University of Arizona., 2016-04)
      Background and Significance: Pediatric stroke is associated with significant morbidity and mortality. In an effort to improve diagnosis and patient management, we established a Pediatric Stroke Team (PST) available 24/7 in January 2012. Outcomes of patients before and after PST implementation are reported. Methods: Retrospective review of pediatric stroke patients (Jan 2009 ‐ Dec 2012) at Phoenix Children’s Hospital in Phoenix, Arizona. Primary outcomes assessed were Glasgow Outcome Scale‐Extended Pediatric Revision (GOS‐E Peds) and discharge disposition. Hospital length of stay, time to neuroimaging, stroke therapies, and adherence to neuroprotective strategies (sodium, glucose, temperature) were also evaluated. Data were analyzed by Wilcoxon rank sum, Fisher’s exact, and chi‐square. Results: There were 64 patients pre‐PST and 30 post‐PST. Overall, GOS‐E Peds was improved post‐PST (2 [1, 3]) vs. pre‐PST (3 [2, 6]) (p = 0.004), with no change if deaths were excluded (post‐PST 2 [1, 2] vs. pre‐PST 2 [1, 6], p = 0.030). Discharge to home was more common in the post‐PST group (p = 0.018). Definitive neuroimaging tended to occur more quickly post‐PST (2 h [1, 2.6]) compared to pre‐PST (4.7 h [1.3, 16.5]) (p = 0.16). Post‐PST patients appropriately received heparin more often (23% vs. 6% pre‐PST, p = 0.034) and had fewer episodes of hyperglycemia (3% vs. 20% pre‐PST, p = 0.033). There were no differences in episodes of fever, hyponatremia, or hypoglycemia. Conclusions: Availability of a PST improved patient care and outcomes. Time to definitive neuroimaging was decreased, appropriate therapies were administered, and adverse events (hyperglycemia) were decreased with PST management. We continue to strive for improved care of pediatric stroke patients.
    • Improving Colorectal Cancer Screening Rates in an Urban Community Health Center

      Seelbaugh, Joseph; The University of Arizona College of Medicine - Phoenix; Brite, Kathleen (The University of Arizona., 2016-03-25)
      Colorectal cancer (CRC) is a leading cause of cancer‐related deaths. Although screening has been shown to significantly reduce mortality associated with the disease, CRC screening rates remain low, especially among many minority groups. The purpose of this study was to determine whether an organized screening regimen improves screening in a community clinic serving patients with low baseline CRC screening rates. The study was conducted at the Wesley Health Center, a Federally Qualified Health Center (FQHC) that serves a predominantly uninsured patient population. Participants were patients aged 50 – 75 years who visited the clinic for routine primary care. A team of clinicians and support staff at the Wesley Health Center developed a systematic CRC screening protocol with interventions tailored for the clinic. Following implementation of the screening regimen, screening rates among the targeted population were examined over a one‐year period and compared to the one‐year period immediately prior to protocol implementation. The primary outcome was the change in CRC screening rates after implementation of the protocol. Results of the study showed CRC screening rates of 45.6% over the trial period, as compared to 13.7% prior to screening interventions, a statistically significant difference (p < 0.001). The investigation provides valuable information regarding the use of practical strategies to increase CRC screening in community health care settings.
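The significance of the 45.6% vs. 13.7% comparison can be illustrated with a two‐proportion chi‐square test. The sketch below assumes hypothetical denominators of 1,000 eligible patients per period (the abstract reports only percentages), so it reproduces the direction of the result rather than the study’s exact statistic.

```python
import math

def two_prop_chi2(a, n1, b, n2):
    """Chi-square test (1 df, no continuity correction) comparing the
    proportions a/n1 and b/n2. Returns (chi2 statistic, p-value)."""
    p_pool = (a + b) / (n1 + n2)
    observed = [a, n1 - a, b, n2 - b]
    expected = [p_pool * n1, (1 - p_pool) * n1,
                p_pool * n2, (1 - p_pool) * n2]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical denominators of 1,000 per period (only percentages are given):
# 456/1000 = 45.6% screened post-intervention vs. 137/1000 = 13.7% before.
chi2, p = two_prop_chi2(456, 1000, 137, 1000)
```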
    • Inferior Vena Cava Filter Fracture and Migration to the Heart: A Review of the Literature and Case Report

      Bowles, Brad; The University of Arizona College of Medicine - Phoenix; Shennib, Hani (The University of Arizona., 2016-04-01)
      Background and Significance: The utilization of IVC filters for pulmonary embolism prevention has increased significantly over the past decade as the indications continue to expand. Although the risks associated with IVC filters are small, a well‐known complication is filter fracture and subsequent embolization of the fragment. Case reports have been published on the devastating effects of fragment migration to the heart, causing intense chest pain, pericardial effusion, cardiac tamponade and death. Research Question: There is a paucity of experience and guidelines for treating patients with a metallic foreign object lodged within the heart. Is there a consensus on the proper management of these cases? How do these patients present and what are the outcomes of treatment? Some clinicians have chosen to observe and monitor, while others have gone to the operating room for open‐heart surgery and retrieval of the fragment. Methods: In an attempt to answer these questions, a systematic review of the published literature was conducted between 1985 and 2015. Only articles related to IVC filter fracture and subsequent fragment migration to the heart were included. The clinical presentation, workup, management, treatment and outcomes were collected as available. Results: A total of 23 articles were published, consisting of one prospective study, retrospective series, and case reports. There were 37 fragments reported to have migrated to the heart in 29 patients. The most common clinical presentations were chest pain (69.0%) and no symptoms (27.6%). Regarding treatment, ten patients underwent observation, three had successful endovascular retrieval, 12 went to the operating room for open‐heart surgery and four cases were unreported. Of the 12 patients with reported pericardial effusion, 11 (91.7%) underwent open surgical repair. Of the eight asymptomatic patients, seven (87.5%) were ultimately managed with observation and the management of the other was unreported.
Conclusions: There appears to be a consensus in the literature that observation and close follow‐up are appropriate options for asymptomatic patients. Symptomatic patients with pericardial effusion may benefit from open‐heart surgery. Cardiovascular compromise such as cardiac tamponade should be managed with open surgery. Based upon these findings and other details in the cases, we have proposed a management algorithm.
    • Initial Generalist Versus Subspecialist Provider for Fertility Treatment, Use of In Vitro Fertilization, and Time to Pregnancy

      Boltz, Mandy; The University of Arizona College of Medicine - Phoenix; Stanford, Joseph B. (The University of Arizona., 2016-03-23)
      Background and Significance: Infertility is a common problem in the United States. Infertility is recognized as a disease by the World Health Organization (WHO) and is identified as an emerging public health priority by the Centers for Disease Control and Prevention (CDC). It is also associated with numerous effects on women’s physical and emotional health, and involves treatment methods that are medically invasive and associated with health implications for the resulting children. A better understanding of the role of generalist providers in the management of infertility may lead to opportunities to promote a balanced approach to infertility. Research Questions: 1) With what types of providers do most women initiate infertility care? 2) How do women who enter care with a generalist provider differ from those who enter care with a fertility subspecialist? 3) Are different outcomes associated with presenting first to a generalist provider versus a fertility subspecialist? Methods: We analyzed mixed-mode questionnaire data from 279 Utah women with primary infertility enrolled through population-based sampling. We compared women presenting first to generalist providers with women presenting first to fertility subspecialists, with the main outcomes of receiving in vitro fertilization (IVF), time to pregnancy, and live birth. Results: The first point of contact for most women (84%) with infertility was a generalist provider. Only 5% of women sought care initially from a fertility subspecialist, and these women were more likely to have higher incomes, be older, and have been trying to conceive for longer periods of time before seeking care.
Women who presented first to a generalist provider were less likely to receive IVF (aOR 0.17; 95% CI: 0.05, 0.57), were equally likely to achieve a pregnancy, and had similar times to pregnancy (aHR 0.80; 95% CI: 0.38, 1.69) as women who presented first to a subspecialist, after controlling for age, time attempting to conceive before seeking care, and income. Conclusion: In this population-based sample of women with primary infertility, presenting first to a generalist was associated with a decreased likelihood of receiving IVF and a similar time to pregnancy. Generalist providers are frequently the first point of care for women with difficulty conceiving and are uniquely positioned to promote a balanced management of infertility.
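The abstract’s aOR of 0.17 comes from a regression model adjusting for age, time attempting to conceive, and income. As a simpler illustration of the underlying measure, the sketch below computes an unadjusted odds ratio with a Wald 95% confidence interval from a 2×2 table; the counts are invented for illustration and are not taken from the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table with a Wald 95% CI on the
    log-odds scale. a, b = exposed with/without the outcome;
    c, d = unexposed with/without the outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: IVF yes/no among generalist-first patients (a, b)
# vs. subspecialist-first patients (c, d).
or_, (lo, hi) = odds_ratio_ci(10, 225, 5, 9)
```

An OR below 1 with an upper confidence limit below 1 is the pattern behind the reported "less likely to receive IVF" finding; the adjusted estimate additionally controls for confounders via regression.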
    • Injections of Bone Marrow Aspirate Concentrate as Treatment for Discogenic Pain

      Shillington, Jon Mark; The University of Arizona College of Medicine - Phoenix; Wolff, Michael (The University of Arizona., 2016-04-20)
      Low back pain (LBP) is one of the most common musculoskeletal pain complaints, affecting up to 84% of the U.S. adult population. In the United States, the highest rate of incidence is between the ages of 45 and 64 years. The causes of LBP are complex and of multiple origins, but one of the primary causes is mechanical low back pain that is discogenic in etiology. This can be secondary to internal disc disruption (IDD) and/or degeneration of the intervertebral disc (IVD), also known as degenerative disc disease (DDD) [10,11]. Combined physical and medical therapies are successful in relieving pain in approximately 90% of cases of low back pain. However, the remaining 10% become chronic and generate a serious public health problem, known as chronic low back pain (CLBP). CLBP decreases both the quality of life and the labor capacity of the patient. As specific diagnostic procedures for LBP have improved, discogenic pain has been identified as the primary cause of CLBP amongst adults. Within the classification of discogenic pain, the most common specific cause of pain – up to 42% of LBP complaints – is IDD, with other distinguishable causes including disc herniation, DDD, and instability of the lumbar segment [10]. Effective treatment for discogenic LBP – and therefore for CLBP – would provide significant relief for individuals as well as for the overall health care system and the employers affected by the patients’ condition. One promising treatment option involves the use of mesenchymal stem cells (MSCs), which may allow for regeneration of the disc itself. Treatment with MSCs via injections of autologous concentrated bone marrow aspirate (cBMA) would capitalize on the regenerative potential of MSCs while reducing the risk of infection or rejection, both significant risks of treatment from a heterologous source.
This project analyzed data collected from 33 patients with confirmed discogenic LBP, who were treated with intradiscal injections of autologous cBMA. After initial treatment, patients were monitored through follow‐up visits and questionnaires (VAS, Oswestry, SF‐36) to determine the efficacy of treatment. The areas of interest for this study were intentionally narrow. This study sought to identify specifically the patients’ self‐reported pain and functioning levels from 2 weeks post‐treatment to 12 months post‐treatment. Those reports were gathered using the aforementioned instruments and synthesized to show overall trends and statistically significant changes in the patients’ self‐assessment. The patients were also asked to give an overall impression of whether or not their back pain had improved post‐treatment. While admittedly limited compared to a double‐blind, randomized, controlled trial, the information was gathered from the patients with the hopes of augmenting ongoing research related to innovative treatments for discogenic LBP and of identifying new areas for further research.
    • The Interaction of β-catenin, Vitamin D, Resveratrol, and Two Common VDR Polymorphic Variants in Colorectal Carcinogenesis

      Van Pelt, Chad; The University of Arizona College of Medicine - Phoenix; Jurutka, Peter (The University of Arizona., 2016-04-20)
      Vitamin D and resveratrol have been widely researched in recent years, especially their apparent abilities to impact a host of physiological processes. Resveratrol, a phytoalexin found in various berries, peanuts, and other plant sources, is purported to possess anti-aging, anti-inflammatory, antioxidant, anticancer, neuroprotective, and antiarthritic properties, while the classical endocrine functions of vitamin D are the control of calcium and phosphate homeostasis. The biologically active metabolite of vitamin D, 1,25-dihydroxyvitamin D (1,25D), is typically synthesized in the kidney, bound to vitamin D binding protein, and shuttled to cellular target sites. Mounting data on the effect of locally synthesized 1,25D in immune, epithelial, neural, and other tissues have led to an increased awareness of the myriad functions of vitamin D, including detoxification, cellular aging and its modulation, immune regulation, neurotransmitter activity, and metabolic control. Both endocrine and intracrine actions of vitamin D are mediated by the vitamin D receptor (VDR), a nuclear receptor that controls vitamin D-directed transcription of target genes. Importantly, the VDR has numerous polymorphisms, one of which results in two phenotypically distinct isoforms, designated M1 and M4. VDR M4 is postulated to be more active than M1 in vitamin D-dependent transactivation. Variable binding affinities between the two isoforms and VDR-interacting proteins such as TFIIB and RXR have also been observed. Another protein known to interact with VDR is β-catenin, the mediator of the Wnt/β-catenin signaling pathway that can drive colorectal carcinogenesis. The goal of this study was to investigate the ability of vitamin D and resveratrol to regulate the Wnt/β-catenin system via stimulation of β-catenin-VDR (both M1 and M4) interaction and subsequent inhibition of β-catenin-mediated transcription.
The current data reported herein support and extend previous work by demonstrating that VDR binds directly to β-catenin and that both vitamin D and resveratrol appear to enhance this interaction. We also present data that 1,25D-stimulated VDR is capable of inhibiting β-catenin transcriptional activity. Significantly, we have shown that the two common VDR polymorphic variants, M1 and M4, are functionally variable, both in their induction of vitamin D-dependent genes and in their inhibition of β-catenin-mediated transcription. VDR M4 exhibits both elevated transactivation and amplified capacity for β-catenin suppression compared to M1, and studies employing site-directed mutagenesis of VDR implicate the glutamic acid at position 2 as being responsible for the reduced activity of the M1 variant. Both polymorphic VDR variants display 1,25D-mediated enhancement of β-catenin association, with the M1 SNP possessing a lower basal (-1,25D) binding to this protein partner but a higher fold stimulation in β-catenin interaction in the presence of 1,25D. Taken together, these data support the notions that VDR influences pathways important for colorectal carcinoma (CRC) development, and that supplementation with vitamin D and resveratrol may reduce colon cancer risk in the general population, especially in individuals with the less active M1 VDR polymorphism. A comprehensive understanding of 1,25D and resveratrol action in VDR signaling may allow for a more personalized approach toward treating vitamin D–related disorders and evaluating risk for carcinogenesis.
    • Investigating the Effects of a Pre‐ and Post‐Discharge Intervention on Access to Care and 30‐Day Readmission Rates of CHF Patients at the Phoenix VA Medical Center

      Ahmad, Shahjehan; The University of Arizona College of Medicine - Phoenix; Urbine, Terry; Dev, Sandesh (The University of Arizona., 2016-03-23)
      Significance: Cardiovascular disease represents the single most costly and common cause of hospitalizations in the US. More alarmingly, congestive heart failure (CHF) represents the largest cause of preventable hospitalizations. The 30‐day readmission rate after a hospitalization for CHF is an increasingly important measure of quality in the management of this chronic condition. Interventions targeted at CHF patients after discharge should address access to care and early follow‐up, and should be investigated as a means of decreasing 30‐day readmissions and improving patient outcomes. The Phoenix VAMC created a new early follow‐up clinic in 2011, and this study was the first investigation of outcomes from the intervention. Methods: Patients were selected who were admitted to the Phoenix VAMC with a primary diagnosis of heart failure (ICD‐9: 428.x). Patients referred to the early follow‐up clinic by their primary medicine team were our cohort of interest; patients who received standard care served as our controls. A retrospective chart review was done to assess health status, compliance with the intervention, and 30‐day outcomes (the patient‐level outcomes). We also compared patients in two time periods, before and after the intervention was implemented (the hospital‐level outcome). Statistical analysis of this cohort study was done by identifying the relative risk of readmission and death. The RE‐AIM framework was used to determine the hospital‐level impact of the intervention. Results: 275 patients were divided into 116 control patients and 159 intervention patients. The RR of readmission in those referred to the clinic was 1.57 (p=0.09), and the RR of mortality was 0.78 (p=0.05). In those patients who were discharged in the post‐intervention time period, the RR of readmission was 0.57 (p=0.036) and the RR of 30‐day mortality was 0.72 (p=0.015). Time to follow‐up was reduced from 15 to 9 days (p<0.01) from the early time period to the late one.
Conclusions: The use of care transition interventions has the potential to address issues of rehospitalization, especially in chronic diseases. Establishing a model which improves patient outcomes will have many long‐term benefits for our healthcare system. This intervention decreased mortality and increased readmissions on a patient level, while decreasing both mortality and readmissions on a hospital level, though other factors may be involved.
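The relative risks reported above can be computed from a 2×2 table with a Wald confidence interval on the log scale. The sketch below uses the study’s actual cohort sizes (159 intervention, 116 control) but hypothetical event counts, since the abstract does not report raw readmission numbers.

```python
import math

def relative_risk(a, n1, b, n2, z=1.96):
    """Relative risk of an event in group 1 vs. group 2, with a Wald 95% CI
    on the log scale. a/n1 = events/total in group 1, b/n2 in group 2."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt((1 / a - 1 / n1) + (1 / b - 1 / n2))  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, (lo, hi)

# Hypothetical readmission counts for the 159 intervention vs. 116 control
# patients (the abstract reports RRs and p-values, not raw counts):
rr, (lo, hi) = relative_risk(40, 159, 25, 116)
```

A confidence interval that spans 1 is consistent with the abstract’s non‐significant patient‐level readmission result (RR 1.57, p=0.09).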