• End-of-Life Care in American Indian Populations of the Southwest

      Law, Emily; The University of Arizona College of Medicine - Phoenix; Trujillo, Michael (The University of Arizona., 2015-05-13)
      American Indian and Alaska Native (AI/AN) populations have long faced health disparities. Although their incidence of some chronic diseases, such as cancer, may be lower than in the general population, they suffer the poorest survival rates of any ethnic group. As AI/AN populations age and live longer with chronic disease, as seen in the rest of the general population, the discussion of palliative care is becoming more important. Currently, there is little literature on palliative care specific to the AI/AN population. This paucity of research serves as an impetus to examine the need for available palliative care resources for AI/AN populations. We present an analysis of twenty interviews with staff members of local hospice organizations and hospitals. The interview questions asked participants about their views and experiences in delivering palliative care. Through these discussions, we investigate the current needs, the social and cultural barriers, and the infrastructure through which palliative care is accessed and delivered.

      Wadas, Erica; The University of Arizona College of Medicine - Phoenix; Graziano, Kathleen; Nigro, John (The University of Arizona., 2015-04-14)
      Purpose: Infants born with Heterotaxy Syndrome (HS) often have intestinal malrotation in addition to severe congenital heart disease (CHD). Given the catastrophic risk of midgut volvulus, where the vascular supply to the gut is cut off causing necrotic bowel and possible future short‐gut syndrome following surgery, an elective Ladd procedure is recommended at the first diagnosis of malrotation. In patients with severe CHD, however, the risk of complications from prophylactic surgery is high, especially in infancy prior to stable cardiac palliation. This study sought to determine whether deferring a Ladd procedure during the first six months of life in infants with CHD is safe by focusing on the incidence of volvulus in the HS population, morbidity of volvulus and morbidity of an elective Ladd procedure. Methods: Medical records of patients with HS and intestinal malrotation at Phoenix Children’s Hospital from 2006‐2011 were reviewed. Stage of heart surgery, severity of heart disease, diagnosis of intestinal malrotation, and timing of Ladd procedure if applicable were recorded. Results: 31 patients with HS and intestinal malrotation were identified. Of the 31, 9 had a Ladd procedure prior to six months of age, 2 for volvulus and the other 7 either electively or for less severe GI symptoms that were not suggestive of volvulus. The other 22 did not have a Ladd procedure prior to six months of age. There was one death (1/22) from a non‐gastrointestinal cause in a patient who had not undergone a Ladd procedure. There were no deaths in the 9 patients who underwent a Ladd procedure (0/9). Conclusions: Given the low overall incidence of volvulus in HS, and with continued vigilance for obstructive symptoms, this study suggests that delaying the Ladd procedure in asymptomatic patients with HS and CHD and intestinal malrotation is safe. 
Watchful waiting may reduce the incidence of cardiac complications during the Ladd procedure by allowing for stabilizing cardiac surgical palliation prior to elective abdominal surgery.

      Vanhoy,Steven; The University of Arizona College of Medicine - Phoenix; Hopf, Harriet (The University of Arizona., 2015-04-14)
      Objective: To compare findings of emergency echocardiography (rescue echo) in the intra-operative period to findings of rescue echo in the ICU setting. Design: We queried a database of perioperative echo for all rescue echo studies done over a two-year period. We compared the frequency of left ventricular (LV) and right ventricular (RV) systolic dysfunction, LV diastolic dysfunction, LV segmental wall motion abnormalities, and hypovolemia between the intraoperative and ICU studies. Results: LV and RV systolic dysfunction were more prevalent in ICU rescue echo studies than in intra-op rescue studies (22% vs. 10% and 34% vs. 13%, respectively, p<0.05 for each). LV diastolic dysfunction was more prevalent in ICU rescue echo studies than in intra-op rescue studies (60% vs. 48%, p<0.05). Segmental wall motion abnormalities (SWMA) were more prevalent in the ICU than in the intra-op setting (38% vs. 19%, p<0.05). Conclusion: In an observational study of real-world rescue echo, LV and RV systolic dysfunction, LV diastolic dysfunction, and LV SWMA were all more common in the ICU than in the intra-op studies. This could reflect differences in patient population, differences in the reasons clinicians perform rescue echo in the OR and in the ICU, or the hemodynamic effects of anesthesia.

      Sinha, Natasha; The University of Arizona College of Medicine - Phoenix; Beyda, David (The University of Arizona., 2015-04-14)
      End-of-life (EOL) care and decision-making in pediatrics is a challenging and complex aspect of patient care experienced by residents and attending physicians. Previous studies have evaluated determinants that contribute to physicians' attitudes toward EOL care, as well as the preparedness of students and residents for EOL decision-making. However, the determinants contributing to a physician's ability to make such decisions and feel confident in addressing EOL issues are dynamic. Recognizing that decision-making changes over time, identifying when these changes occur may demonstrate the need for educational interventions early in the careers of medical students and residents to help prepare them for EOL decision-making. A longitudinal assessment of changes in attitudes toward and knowledge of EOL discussions, and of how these impact EOL decision-making, has not previously been performed. This preliminary study establishes a baseline for medical students, residents, and attendings in EOL decision-making and the factors that contribute to their decisions. The preliminary data demonstrate a difference between attendings and both residents and students: even when the probability of survival is low, residents and students are more likely than attendings to select aggressive management options. Data obtained after completion of future surveys will show when decision-making changes, which factors contribute to these changes and their significance in making decisions, and when participants become comfortable addressing EOL care.

      Wheeler, Kellie; The University of Arizona College of Medicine - Phoenix; Panchanathan, Sarada Soumya (The University of Arizona., 2015-04)
      Background and Significance The prevalence of pediatric hypertension (HTN) has increased in the past several decades and is projected to continue to rise.2 Because normal blood pressure (BP) values in children depend on age, sex, and height, HTN is difficult to recognize. If not diagnosed during childhood, HTN poses several long‐term health risks.4,10 Electronic medical records (EMR) have tools to help recognize elevated BP in children. Unfortunately, many clinicians are unaware of these support tools, and pediatric HTN is underdiagnosed. Research Question This study is designed to improve the detection of HTN in children. Methods This is a prospective quality improvement (QI) study completed at a teaching institution with rotating physicians. We reviewed the charts of 1697 children aged 3 to 18 years who were seen by physicians for well‐child visits in March, June, July, August, November 2014, and January 2015. We recorded children with elevated BP and determined if HTN was recognized (noted in the assessment/plan or BP repeated). We used March as our baseline detection rate and completed five interventions, one before each month. All interventions consisted of PowerPoint presentations for medical personnel (physicians, nurses, medical assistants). The last two interventions consisted of a change in the EMR (BP percentiles displayed in a summary page) and signs hung in the clinic. Pre‐ and post‐intervention data underwent analysis, and we examined factors that may impact early detection of HTN. Results Of the 1697 children, 188 (11.1%) had elevated BP. The prevalence of elevated BP declined from the pre‐intervention month to post‐intervention months (March 13.5%, June 10.3%, July 9.7%, August 9.2%, November 12.5%). The prevalence returned to baseline by January (13.5%). The recognition of elevated BP improved from 25% in March to 44% and 55% in June and July, respectively. 
There was a decline in detection from July to August and November (55% to 41% and 35%). There was improved detection again from November to January (35% to 48%). Factors that increased the detection of HTN were obesity (χ2=22.9, p=0.000002), systolic BP >120 (χ2=8.1, p=0.0045), and a past history of elevated BP (χ2=5.1, p=0.024). Conclusions Our educational interventions improved the absolute detection of HTN. Repetition of interventions and involvement of the whole care team were important for sustaining the improvements, especially for a teaching institution with rotating physicians. Repeated interventions may not be necessary for private practice clinics. The improved detection correlated with a steady decline in the prevalence of HTN, probably related to blood pressures that were falsely elevated due to patient anxiety and incorrect cuff sizing. Obesity, systolic BP>120, and past history of at least one elevated BP significantly improved the detection. This QI project was not intended to determine the efficacy of each intervention, but rather to improve the detection rate as a whole. We cannot conclude whether the monthly changes were due to chance, but we can conclude that we improved the overall detection.
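As a methodological aside, the chi-square associations reported above come from 2x2 contingency tables (for example, elevated-BP detection vs. obesity status). A minimal sketch of that computation, using hypothetical counts rather than the study's actual data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    # Shortcut form: chi2 = n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)),
    # algebraically identical to the usual observed-vs-expected sum.
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts (not the study's data):
# rows = elevated BP detected / missed, columns = obese / non-obese.
detected_obese, missed_obese = 40, 30
detected_nonobese, missed_nonobese = 50, 100
chi2 = chi_square_2x2(detected_obese, missed_obese,
                      detected_nonobese, missed_nonobese)
```

With one degree of freedom, a statistic above 3.84 corresponds to p < 0.05, which is how a value like χ²=22.9 maps to a very small p-value.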

      Scotch, Allison; The University of Arizona College of Medicine - Phoenix; Henry, Michael (The University of Arizona., 2015-04-13)
      Background. Rates of childhood obesity in the United States have risen dramatically in recent decades, with more than 31% of children currently classified as overweight or obese. This raises concerns about the effects of weight on outcomes for pediatric illness, including cancer. There is some evidence of poorer outcomes for pediatric leukemia patients who are overweight or obese, and studies in adults have suggested negative impacts of obesity in numerous cancer types. To date, there are no studies investigating outcomes in overweight and obese children with Hodgkin lymphoma (HL). Our hypothesis was that higher body mass index (BMI) at diagnosis is associated with increased risk for HL relapse. Methods. We conducted a retrospective cohort study of 101 pediatric HL patients treated between 1980 and 2010 at Phoenix Children’s Hospital, a large pediatric oncology referral center in the Southwestern United States. Data was abstracted from electronic and paper medical charts as well as survival clinic follow‐up records. We performed logistic regression and conducted a survival analysis to test whether body mass index (BMI) at diagnosis was associated with time to disease relapse. For this pilot study, we conducted a primary analysis as well as several exploratory secondary analyses with the goal of generating hypotheses to be tested in future large studies of this population. Results. In the primary analysis comparing underweight and normal children to overweight and obese children, none of the patient characteristics – sex, race, age, clinical risk level, or radiation status – were significantly associated with BMI group. In the univariate analysis of HL relapse, children in the overweight/obese group had an increased unadjusted odds ratio of 1.58 (95% CI: 0.50‐5.28), but this was not statistically significant. 
Exploratory analyses categorizing BMI groups in various ways also suggested an association between increased BMI and risk for HL relapse, though this failed to reach statistical significance. No potential confounders were associated with HL relapse except radiation status (p=0.004), although we were unable to calculate an odds ratio due to a lack of patients in some subgroups. In the survival analysis, radiation was the only variable significantly associated with time to HL relapse. Kaplan‐Meier curves of relapse‐free survival time did not show a significant difference between BMI groups in the primary analysis, but secondary analyses suggested a nonsignificant trend toward decreased long‐term disease‐free survival in patients with higher BMI. Discussion. The relatively small sample size for this pilot study precluded demonstration of statistically significant differences in HL relapse risk or time to relapse between BMI groups. However, exploratory analyses suggested a trend toward increased risk for relapse and shorter disease‐free survival in patients with higher BMI, and these results merit further investigation in larger studies. Multi‐center collaborative studies will be required to attain sufficient sample sizes to accurately assess clinical prognosis in this patient population. Improving our understanding of how BMI affects pediatric cancer outcomes is an important step toward identifying patients at increased risk and determining how best to individualize treatment and monitoring plans for overweight and obese children.
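As an illustration of how the unadjusted odds ratio above (OR 1.58, 95% CI 0.50-5.28) is read, the point estimate and a Woolf log-scale confidence interval can be computed directly from a 2x2 table. The counts below are hypothetical, not the study's:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table [[a, b], [c, d]]
    with a 95% Wald (Woolf) confidence interval on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: relapse yes/no in overweight-obese vs.
# underweight-normal BMI groups.
or_, lo, hi = odds_ratio_ci(5, 20, 8, 68)
```

When the interval spans 1.0, as in the study's CI of 0.50-5.28, the association is not statistically significant despite an elevated point estimate.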

      Reeder, David; The University of Arizona College of Medicine - Phoenix; Beyda, David (The University of Arizona., 2015-04-13)
      The value of an allopathic medical school interview lies in its ability to produce something unobtainable by other means: a rough assessment of the non-cognitive attributes of a viable candidate. Many allopathic institutions rely on the interview when determining applicant viability for both professional standards and institutional fit. However, applicants can distort the truth or train themselves to appear to exude any one of a number of admirable qualities for a brief period of time. Responses that reflect socially acceptable answers, rather than the true nature of an applicant's character, represent forms of dishonesty. It is our belief that the high-stakes setting of a conventional allopathic interview creates a moral hazard for prospective matriculants, such that applicants' genuine responses are confounded with social desirability bias. In the research literature, social desirability is often operationalized as the combination of self-deceptive enhancement and impression management (IM). We sought to establish the presence of impression management and/or self-deceptive enhancement tactics among interviewing allopathic medical school applicants. Their presence was determined using the 6th version of the Balanced Inventory of Desirable Responding (BIDR), a validated inventory that relies on 40 Likert-scale self-reports on common situations. We offered the BIDR to all interviewing applicants to the University of Arizona College of Medicine - Phoenix on three of the six interview days. The inventory was administered during a 10-minute break offered directly after completion of the university's multiple mini interviews, so as to assess the presence or absence of social desirability as close to the high-stakes setting as possible. We received 104 responses, 12 of which were excluded from the dichotomous scoring because they were not completed in their entirety.
Our findings from 92 allopathic medical school applicant respondents indicated that the average interviewing applicant was engaging in impression management tactics above and beyond the oft-referenced BIDR cutoff values, with an average score of 7.543/20; however, respondents were not engaging in self-deceptive enhancement tactics beyond their BIDR reference peers, with an average of 6.27/20. Both self-deception and impression management exist on a spectrum; however, the arbitrary cutoffs of honest impression management established by Paulhus's 6th version of the BIDR were exceeded. Our results indicate that the context of allopathic interviews is associated with increased levels of impression management tactics; conversely, it is not associated with increased self-deceptive enhancement tactics.

      Raza, Ali; The University of Arizona College of Medicine - Phoenix; Ernst, Kacey (The University of Arizona., 2015-04-13)
      Background: Dengue fever is the most common mosquito-borne viral disease in the world. Its symptoms can be fairly nonspecific and most commonly include fever, rash, headache, and eye pain. Passive surveillance is currently the most prevalent method used to detect dengue cases in the United States. Identification of positive cases can be limited by the public's awareness of the disease's symptoms, by barriers to healthcare-seeking behavior, and by physician approval of laboratory testing. Objective: This study sought to evaluate barriers to dengue reporting, as well as the patient-level factors that may limit the efficacy of passive surveillance of dengue in Key West, Florida. Methods: Cross-sectional surveys were administered across Key West, FL. Subjects were asked if they had a recent fever, whether they had additional dengue symptoms, and whether they sought medical care for these symptoms. A hypothetical question was also posed: would you seek medical care for a fever greater than 102°F? Responses were stratified according to patient characteristics and demographics. Results: In Key West, patient-level factors that influenced the decision to seek medical care for a high fever were: having a specific doctor to call when sick (p<0.006), health insurance status (p<0.037), and ethnicity (p<0.005). Additionally, barriers to dengue reporting were identified. The most impactful were the patient's decision to seek medical care for symptoms consistent with dengue fever and the doctor's decision to order confirmatory dengue laboratory tests. Only one person with a recent fever plus one additional classic dengue symptom received laboratory testing, and this was done outside of the United States. Four individuals met the current WHO clinical case definition for dengue, yet none were offered laboratory testing or were diagnosed with the disease.
Conclusion: This study shows that both patients and doctors in Key West, Florida underestimate the potential for dengue when there are symptoms consistent with the disease. As such, it is certainly possible that there have been unreported cases in the country.

      Parsons, Christine; The University of Arizona College of Medicine - Phoenix; Yaari, Roy; Dougherty, Jan (The University of Arizona., 2015-04-13)
      Memory screening in the community promotes early detection of memory problems, as well as Alzheimer’s disease (AD) and related illnesses, and encourages appropriate intervention. The Montreal Cognitive Assessment (MoCA) is a rapid and sensitive screening tool for cognitive impairment that can be readily employed at the clinical level, but little is known about its utility as a community screening tool. Also, little is known regarding the demographics of the population that presents for a community screen. The research aims to evaluate the demographics of the participants that attended community memory screens in the greater Phoenix metropolitan area and to evaluate the prevalence of screen positives using the MoCA. It is hypothesized that cognitive impairment will be significantly prevalent in the screened population and that age and family history of dementia will correlate with the presence of cognitive impairment. The study methods involve descriptive analysis and application of statistical tests to evaluate for significant relationships between demographic variables and MoCA scores. The population (n=346) had a mean age of 72 (SD =10.7), was primarily female (70%), primarily Caucasian (68%) and 86% had greater than a high school education. A 58% prevalence of cognitive impairment was found in the population as defined by the MoCA. Increased age, male gender, and non‐Caucasian race correlated with lower MoCA scores. Lower education correlated with lower MoCA scores despite the inherent educational correction in the MoCA. Diabetes and a family history of AD were not significant factors. Although the number of true positives following methodical diagnosis is unknown, given the validity of the MoCA in discerning cognitive impairment, the screen was likely worthwhile and supports more routine use of community memory screens. 
Variables identified that were associated with increased cognitive impairment better describe the population at risk and can be utilized to focus future screening efforts.

      Parrish, Ashley; The University of Arizona College of Medicine - Phoenix; Bulloch, Blake (The University of Arizona., 2015-04-13)
      Background: Pain scales developed for children have been noted to be neither useful nor practical in an ambulance, and EMS providers have been found to use non-standardized measures of pain severity in children. A recently published evidence-based guideline recommends using pictorial scales (PS) for patients aged 4-12 years and observational-behavioral scales (OBS) for younger patients. Our objectives were to assess EMS providers' baseline knowledge, self-reported practices, self-efficacy for treating pain in children, and preference for pediatric pain scales. Methods: A survey and education module were administered to a convenience sample of EMS providers from four agencies within a large metropolitan area. Providers answered 20 Likert-scale items, received a 15-minute didactic on pain assessment in children, and then answered four additional survey items. Results: There were 397 surveys returned (80% of providers receiving the didactic). Sixty percent of providers had practiced >10 years, 99% were EMT-P, and 91% were male. 88% reported feeling "Very-Extremely" comfortable measuring pain severity in adults; 38% reported the same for children. 57% reported having been trained on the use of pain scales in children; 46% were at least "Moderately" familiar with any PS and 24% with any OBS. While 44% assessed their current practice as "Sometimes-Always" using pediatric scales, <25% of providers reported carrying paper or electronic copies of pain scales. 75% reported using their own observation to assess pain "Most of the Time-Always." Self-efficacy results for utilizing pain protocols and measuring pain scores for 8-year-old and 36-month-old patients revealed that 68% and 48%, respectively, were at least "Mostly" certain they could perform correctly. After education about pediatric pain scales, 41% and 31% reported they would be more than "Somewhat" likely to use PS or OBS, respectively.
Conclusion: A sample of EMS providers reported a high level of discomfort assessing pain in children, a moderate prevalence of training, and low familiarity with existing pediatric pain scales. Most use general impression to assess pain instead of pain scales. After education, only a minority of providers reported they would be likely to incorporate these tools into their practice. This is an important barrier to adoption of the evidence-based guideline for management of acute traumatic pain.

      OShea, Michele; The University of Arizona College of Medicine - Phoenix; Tang, Jennifer (The University of Arizona., 2015-04-13)
      Background and Significance: Both HIV and unintended pregnancies have been associated with adverse maternal, perinatal, and infant outcomes. Malawi is a country with both high HIV prevalence and high rates of unintended pregnancy: 13% of women aged 15-49 years have HIV, and 41% of pregnancies are unintended. Research Question: The objectives of this study were to describe the most recent pregnancy intentions and family planning preferences of HIV-infected and HIV-uninfected postpartum Malawian women, and to assess whether HIV status is associated with fertility desire and with knowledge of intrauterine contraception (IUC) and the subdermal contraceptive implant. Methods: We conducted a cross-sectional analysis of the baseline characteristics of Malawian women enrolled in a prospective cohort study assessing postpartum contraceptive uptake and continuation. Women at a government hospital completed a baseline survey assessing reproductive history, family planning preferences, and knowledge of IUC and the implant. We used Pearson's chi-square tests to compare these parameters between HIV-infected and HIV-uninfected women. Modified Poisson regression was performed to assess the association between HIV status and fertility desire and knowledge about IUC and the implant. Results: Of 634 postpartum women surveyed, HIV-infected women were more likely to report that their most recent pregnancy was unintended (49% versus 37%, p=0.004). Nearly all women (97%) did not want a child in the next two years, but HIV-infected women were more likely to desire no more children (adjusted PR: 1.59; 95% CI: 1.33, 1.89). HIV-infected women were also less likely to know that IUC (adjusted PR 0.72; 95% CI: 0.61, 0.84) and the implant (adjusted PR 0.83; 95% CI: 0.75, 0.92) are safe during breastfeeding. Conclusion: Postpartum women strongly desire family spacing, and many HIV-infected postpartum women desire no more children, suggesting an important role for these long-acting methods.
Education about the efficacy and safety of IUC and the implant, particularly during breastfeeding, may facilitate postpartum use.
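The adjusted prevalence ratios above come from modified Poisson regression; a crude (unadjusted) prevalence ratio with a Katz log-method confidence interval can be sketched from raw counts, which shows how such an estimate is read. The counts below are hypothetical, not the study's:

```python
import math

def prevalence_ratio_ci(a, n1, c, n2, z=1.96):
    """Crude prevalence ratio comparing two groups, with a 95% CI
    via the Katz log method. a/n1 and c/n2 are the two prevalences."""
    pr = (a / n1) / (c / n2)
    # Standard error of ln(PR) under the Katz method.
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(pr) - z * se)
    hi = math.exp(math.log(pr) + z * se)
    return pr, lo, hi

# Hypothetical counts: unintended pregnancy in HIV-infected
# (49 of 100) vs. HIV-uninfected (37 of 100) women.
pr, lo, hi = prevalence_ratio_ci(49, 100, 37, 100)
```

Unlike this crude estimate, the modified Poisson model used in the study adjusts for covariates and uses robust standard errors.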

      Morshed, Trisha; The University of Arizona College of Medicine - Phoenix; Jacobson, Sandra (The University of Arizona., 2015-04-13)
      Objective: Formed visual hallucinations are a common phenomenon in neurodegenerative disorders such as Parkinson's disease (PD), Alzheimer's disease (AD), and dementia with Lewy bodies (DLB). While Lewy-type alpha-synucleinopathy (LTS) is the hallmark neuropathological finding in PD and DLB, amyloid plaques and neurofibrillary tangles are the pathological findings in AD. Previous research has linked complex or formed visual hallucinations (VH) to LTS in neocortical and limbic areas in patients with PD and DLB. As VH also occur in Alzheimer's disease, and AD pathology often co-occurs with LTS, we questioned whether this pathology might also be linked to VH. Methods: We performed a semi-quantitative neuropathological study across brainstem, limbic, and cortical structures in subjects with a documented clinical history of VH and a clinicopathological diagnosis of PD, AD, or DLB. 173 subjects, including 50 with VH and 123 without VH, were selected from the Arizona Study of Aging and Neurodegenerative Disorders. Clinical variables examined included the Mini-Mental State Exam, Hoehn & Yahr stage, and total dopaminergic medication dose. Neuropathological variables examined included total and regional LTS, plaque, and tangle densities. Results: A significant relationship was found between the density of LTS and the presence of VH in all diagnostic groups. Plaque and tangle densities were also associated with VH in PD (p=.003 for plaques and p=.004 for tangles), but not in AD, where densities were high regardless of the presence of hallucinations. Conclusion: Plaques and tangles, as well as LTS, may contribute to the pathogenesis of VH. Incident VH may be a clinical indicator of underlying pathological events: the development of plaques and tangles in patients with PD, and of LTS in patients with AD.

      Lukefahr, Ashley Leigh; The University of Arizona College of Medicine - Phoenix; Funk, Janet (The University of Arizona., 2015-04-13)
      Zoledronic acid (ZA), the gold-standard treatment for breast cancer-derived osteolytic bone lesions, induces apoptosis in mature osteoclasts. Curcumin, a plant-derived component of turmeric (Curcuma longa), inhibits osteoclast differentiation. This study aimed to determine the in vitro and in vivo effects of ZA and curcuminoids, alone and combined, on osteoclast differentiation and survival, breast cancer cell growth, breast cancer cell-induced osteolytic bone lesion area, and bone mineral density (BMD). Curcuminoids, but not ZA, inhibited osteoclast formation at doses that did not alter precursor viability, as assessed by osteoclastogenesis assays using murine RAW 264.7 cells. Combined curcuminoids and ZA did not differ from curcuminoids alone in their effects on osteoclast survival/formation. The half-maximal inhibitory concentration (IC50) for ZA alone was 4 μM, while the IC50 for curcuminoids plus ZA was 6 μM. Curcuminoids and ZA inhibited in vitro cell viability of human breast cancer-derived MDA-MB-231 cells, as assessed by MTT assays. The IC50 of ZA alone was projected to be 1.0677 x 10^4 μM, while the IC50 for curcuminoids alone (9.1 x 10^1 μM) was close to the IC50 for curcuminoids plus ZA (1.31 x 10^2 μM curcuminoids with 300 μM ZA). The in vivo effects of ZA (2 μg/kg/d) and curcuminoids (25 mg/kg/d), alone and combined, on osteolytic bone lesions derived from inoculation with MDA-MB-231 cells were assessed. Radiographically evident osteolytic bone lesion area did not differ between treatment groups, with a trend toward decreased osteolytic lesion area in mice treated with ZA. In non-responders (mice without bone or pericardial tumors), BMD, assessed by dual-energy x-ray absorptiometry, was increased in mice administered ZA.
Thus, for the first time, the combined in vitro effects of ZA and curcuminoids on osteoclast formation and survival were demonstrated, as well as the combined effects of ZA and curcuminoids on breast cancer-derived osteolytic bone lesions and BMD.
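For readers unfamiliar with the IC50 values quoted above: an IC50 is the concentration at which a response (here, osteoclast formation or cell viability) falls to half its maximum. A common rough estimate interpolates linearly on log-dose between the two measurements bracketing 50%; the dose-response values below are illustrative, not from this study:

```python
import math

def ic50(doses, viability):
    """Estimate the dose at 50% viability by linear interpolation
    on log-dose, given viability (%) measured at ascending doses."""
    for i in range(1, len(doses)):
        v0, v1 = viability[i - 1], viability[i]
        if v0 >= 50 >= v1:  # the 50% crossing lies in this interval
            frac = (v0 - 50) / (v0 - v1)
            log_d = math.log(doses[i - 1]) + frac * (
                math.log(doses[i]) - math.log(doses[i - 1]))
            return math.exp(log_d)
    raise ValueError("viability never crosses 50%")

# Hypothetical dose-response: viability (%) falls with increasing
# dose (uM) of an inhibitor.
est = ic50([1, 2, 4, 8, 16], [95, 80, 50, 30, 10])
```

In practice, IC50s such as those reported above are usually obtained by fitting a four-parameter logistic curve rather than by simple interpolation, which is why a value can be "projected" beyond the tested range.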

      Little, Colin; The University of Arizona College of Medicine - Phoenix; Sarko, John (The University of Arizona., 2015-04-10)
      Background and Significance: The i-STAT point-of-care blood analyzer is a handheld device used for a variety of laboratory analyses in medical settings. Much research has been performed to evaluate its validity, but it has not been exhaustively tested in real-world emergency department settings, despite its increasingly popular use in such settings. Methods: We retrospectively examined medical records at the Maricopa Integrated Health Systems Emergency Department to find 100 instances between February 2014 and September 2014 in which a patient had electrolyte testing performed both on the i-STAT and in the central laboratory within a 60-minute window. These data were examined using variance of means and Bland-Altman graphing for equivalency. Results: We set the clinical equivalence threshold for each lab value at 5% of the mean normal value. That is, if the i-STAT differed from the central lab by less than 5% of the middle of the normal range (137-145 for sodium, 5% of which is 7), then we considered the two clinically equivalent. At this level we were unable to show clinical equivalence. In addition, all electrolytes tested showed small but significant bias between the i-STAT and the central laboratory. Re-examination of the data excluding all measurements more than 15 minutes apart showed similar findings. Conclusions: At this time we cannot show equivalency between the i-STAT device and the central laboratory when used under real-life emergency department conditions. More research is needed to support or refute these findings.
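A minimal sketch of the Bland-Altman analysis used above: the bias is the mean of the paired device-minus-reference differences, and the 95% limits of agreement are bias ± 1.96 SD of those differences. The paired sodium values below are hypothetical, not the study's data:

```python
import statistics

def bland_altman(device, reference):
    """Mean bias and 95% limits of agreement between paired
    measurements from two methods (Bland-Altman analysis)."""
    diffs = [d - r for d, r in zip(device, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired sodium values (mmol/L): i-STAT vs. central lab.
istat = [138, 141, 135, 144, 139, 137]
lab = [140, 142, 136, 145, 141, 138]
bias, lower, upper = bland_altman(istat, lab)
```

A nonzero bias, or limits of agreement wider than the chosen 5%-of-normal threshold (7 mmol/L for sodium), would argue against clinical equivalence, as in the study's findings.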

      Kim, Nathan; The University of Arizona College of Medicine - Phoenix; Brachman, David (The University of Arizona., 2015-04)
      Significance: Contrast-enhanced (CE) and fluid-attenuated inversion recovery (FLAIR) MRI are current standard-of-care tools for delineating radiation treatment targets in high-grade glioma (HGG) patients. However, in the setting of retreatment, tumor regrowth and non-tumor therapy-related inflammation, known as post-treatment radiation effect (PTRE), have identical MRI appearances. As a result, FLAIR MRI can be an unreliable tool for treatment planning. Surgical biopsy can definitively distinguish recurrent tumor from PTRE but has many disadvantages, namely operative risk and cost. Dynamic susceptibility-weighted contrast-enhanced (DSC) MRI perfusion can non-invasively detect distinct characteristics of tumor and PTRE through measurements of relative cerebral blood volume (rCBV): PTRE exhibits decreased microvascular density, whereas tumor recurrence displays angiogenesis and microvascular proliferation. Thus, DSC-MRI affords the opportunity to better define tumor burden within, and possibly outside of, these nonspecific regions. Objective: To assess the extent to which rCBV maps correlate with re-irradiation treatment plans in patients with recurrent tumor, in order to identify potential differences in treatment planning. Design: This study enrolled 8 previously treated HGG patients presenting for re-irradiation of suspected recurrent tumor at a single hospital on an IRB-approved trial. All patients underwent DSC-MRI and routine MRI imaging prior to re-irradiation treatment planning, and underwent treatment as per routine clinical protocol. Following therapy, rCBV and radiation dose maps were overlaid on conventional MR images to delineate differences in identified tumor burden. Results: In 4 of the 8 patients, rCBV images showed evidence of tumor outside of the RT planning volumes, while in the other 4, the tumor was fully treated but large volumes of uninvolved brain received radiation.
Conclusion: DSC‐MRI better identified unique regions of potential tumor burden in recurrent HGG patients compared to conventional MRI and could be used to improve radiation treatment planning in re‐radiated patients.

      Jugler, Tanner; The University of Arizona College of Medicine - Phoenix; Hartmark-Hill, Jennifer (The University of Arizona., 2015-04-10)
      This pilot project explores medical student preference for simulation‐based case‐based instruction (CBI) compared with traditional PowerPoint‐lecture CBI. The study population consisted of volunteer first‐, second‐, third‐, and fourth‐year medical students. Subjects were randomized into control (traditional CBI) and intervention (simulation CBI) groups, and preference data were collected via pre‐ and post‐surveys administered before and after the activity. Preference was limited to enjoyment of the learning activity and opinion of its benefit on exams. T‐tests were applied to the data to assess statistical significance. Post‐activity enjoyment was higher in the intervention group than in the control group. While opinion that simulation CBI may benefit exam scores and knowledge retention was above neutral in both groups, the study did not find a significant difference in opinion between the control and intervention groups. The results suggest that students who have experienced simulation CBI enjoy it more than traditional CBI and are more in favor of changing the current model of case‐based instruction.
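A between‐group comparison of survey ratings like the one described above is typically run as an independent‐samples t‐test. As a minimal sketch (not the study's actual analysis code; the Likert ratings below are invented placeholders):

```python
from scipy import stats

# Hypothetical 5-point Likert enjoyment ratings (invented data,
# not the study's actual survey results)
control = [3, 3, 4, 2, 3, 4, 3, 2]        # traditional lecture CBI
intervention = [5, 4, 5, 5, 4, 5, 4, 5]   # simulation CBI

# Welch's t-test (does not assume equal group variances)
t_stat, p_value = stats.ttest_ind(intervention, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Welch's variant is a common default for small, possibly unequal‐variance groups like pre/post survey arms.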

      Hintzen, Calliandra; The University of Arizona College of Medicine - Phoenix; Quan, Dan (The University of Arizona., 2015-04-10)
      Renal stones (or “calculi”) are a relatively common condition, affecting up to 12 percent of people during their lifetime. The typical presentation of renal calculi is acute, intermittent flank pain, termed “renal colic”, which may radiate to the groin. Pain may be accompanied by hematuria, nausea, or vomiting.1 Acute renal colic is a common cause of presentation to the Emergency Department, accounting for an estimated 1 million emergency room visits annually in the United States.2 The severe pain associated with renal calculi requires immediate analgesia, and effective analgesia is associated with improved functional capacity after drug administration.3 In this trial, we compared the efficacy of IV ketorolac vs. IV ibuprofen for pain control in patients with renal colic in a three‐armed, double‐blind, prospective trial. Patients were randomized to one of three treatment groups, receiving parenteral infusions of IV ibuprofen + morphine, IV ketorolac + morphine, or morphine monotherapy. Outcome of drug administration was measured by patients’ self‐assessment of pain on a verbal scale at 15, 30, 60, and 120 minutes after drug administration. We hypothesized that IV ibuprofen would provide effective, non‐opioid pain relief in the emergency setting and might have a lower incidence of adverse effects than ketorolac. Need for rescue analgesia (with 4 mg morphine) was observed as an indirect measure of analgesic efficacy. A total of 11 patients completed the study. There was no significant difference in the area under the curve of pain scores among the three treatment arms (p > 0.4). The ibuprofen group demonstrated consistent improvement in pain over the 120 minutes of the study, with 100% of patients in that arm demonstrating downtrending pain scores. Though the sample size was too small to identify a statistically significant difference in need for rescue medication, there was a trend toward increased opioid use in the ibuprofen group, with 50% of those participants receiving rescue analgesia with morphine. The sample size of this pilot study is inadequate to fully assess the analgesic efficacy of IV ibuprofen for renal colic. A trend toward improved pain control in the ibuprofen group was observed, with 100% of patients in the ibuprofen arm reporting decreased pain after 120 minutes (compared with 66% in the ketorolac arm and 75% in the morphine‐monotherapy arm). Further study of efficacy and need for rescue analgesia is warranted.
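The area‐under‐the‐curve pain‐score endpoint described above amounts to trapezoidal integration of each patient's pain ratings over the measurement times. A minimal sketch (the scores below are an invented placeholder patient, not trial data):

```python
def trapezoid_auc(times, scores):
    """Area under a pain-score curve via the trapezoidal rule."""
    auc = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        auc += dt * (scores[i] + scores[i - 1]) / 2.0
    return auc

# Verbal pain scores (0-10) at 0, 15, 30, 60, and 120 minutes
# after drug administration (hypothetical patient)
times = [0, 15, 30, 60, 120]
scores = [9, 7, 6, 4, 2]
auc = trapezoid_auc(times, scores)
print(auc)  # lower AUC means better overall pain control
```

Using AUC rather than a single time point captures sustained relief across the whole 120‐minute observation window.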

      Hickle, Kelli; The University of Arizona College of Medicine - Phoenix; Shennib, Hani (The University of Arizona., 2015-04-10)
      In 2005, the Centers for Medicare and Medicaid Services (CMS) expanded eligibility criteria for patients to receive carotid artery stenting (CAS) as an alternative to carotid endarterectomy (CEA). The goal of this study was to examine the outcomes of wider utilization of CAS and the sustainability of the favorable outcomes reported in pre‐marketing FDA approval trials. A cohort of 169 patients undergoing either CEA or CAS at two institutions with CMS approval and experience in both procedures was retrospectively analyzed for outcomes and their determinants. From 2007 to 2012, patients underwent either CEA (n = 70) or CAS (n = 99) at one of the two institutions selected for study. The two groups had similar baseline characteristics, with the exception of more symptomatic patients (55.7% CEA vs. 17.2% CAS; p < 0.001) and more previous stroke or transient ischemic attack (TIA) (72.5% CEA vs. 39.4% CAS; p < 0.001) in the CEA group. Lesion characteristics differed between the two groups in the presence of a thrombus (16.0% CEA vs. 2.02% CAS; p = 0.002), stenosis ≥ 80% (69.6% CEA vs. 95.0% CAS; p < 0.001), and lesion length ≥ 2 cm (70.6% CEA vs. 24.4% CAS; p < 0.001). Major adverse events within 30 days of the procedure were higher in the CAS group (0% CEA vs. 5.05% CAS; p = 0.077), but the difference was not statistically significant. Three of the five patients who suffered a major adverse event had a stroke; all three were ≥ 80 years old, asymptomatic, and had ≥ 80% stenosis. Acute neurologic events, including strokes and TIAs, were higher in the CAS group (1.4% CEA vs. 12.1% CAS; p = 0.016). No myocardial infarction occurred in either group. Minor adverse events occurred in 49.5% of CAS and 7.1% of CEA patients (p < 0.001). When total minor adverse events were subdivided by event type and analyzed, only hemodynamic instability differed significantly between the CEA and CAS groups (1.4% CEA vs. 41.4% CAS; p < 0.001). Asymptomatic CAS patients with high‐grade stenosis (≥ 80%) had more hemodynamic instability (p < 0.001) than matched CEA patients. Hemodynamic instability in CAS procedures correlated with embolic protection device performance issues (p = 0.004), successful stent placement (p = 0.018), and post‐stenting dilation (p < 0.001). Post‐market, real‐world utilization of CAS results in higher rates of neurologic events and hemodynamic instability. However, the overall stroke rate was comparable to that reported in pre‐marketing FDA approval trials. CEA or CAS can be offered with a stroke rate ≤ 3% at institutions with extensive experience. CEA should remain the procedure of choice for asymptomatic carotid stenosis, particularly in elderly patients. Hemodynamic instability is a minor but important adverse event associated with CAS and should be further investigated.
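With small event counts like the 30‐day major adverse events above (0 of 70 CEA vs. 5 of 99 CAS), significance is typically assessed with Fisher's exact test rather than a chi‐square test. A minimal sketch using the counts reported above (the choice of test is an assumption; the abstract does not state which test was used):

```python
from scipy.stats import fisher_exact

# 2x2 table built from the counts in the abstract:
# rows = event / no event, columns = CEA / CAS
table = [[0, 5],
         [70, 94]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"p = {p_value:.3f}")
```

The resulting two‐sided p‐value is consistent with the p = 0.077 reported for this comparison.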

      Fortin Ensign, Shannon Patricia; The University of Arizona College of Medicine - Phoenix; Tran, Nhan (The University of Arizona., 2015-04-10)
      Glioblastoma (GB) is the highest‐grade and most common form of primary adult brain tumor, characterized by a highly invasive cell population. GB tumors develop treatment resistance and ultimately recur; median survival is nearly fifteen months, and importantly, the invading cell population shows decreased sensitivity to therapeutics. Thus, there remains a need to identify the genetic and signaling mechanisms that promote tumor spread and therapeutic resistance in order to develop new targeted treatment strategies against this rapidly progressive disease. TWEAK-Fn14 ligand-receptor signaling is one mechanism in GB that promotes cell invasiveness and survival, and it depends on the activity of multiple Rho GTPases, including Rac1. Here, we show that Cdc42 is essential in Fn14-mediated Rac1 activation. We identified two guanine nucleotide exchange factors (GEFs), Ect2 and Trio, involved in the TWEAK-induced activation of Cdc42 and Rac1, respectively, as well as in the subsequent TWEAK-Fn14-directed glioma cell migration and invasion. In addition, we characterized the role of SGEF in promoting Fn14-induced Rac1 activation. SGEF, a RhoG-specific GEF, is overexpressed in GB tumors and promotes TWEAK-Fn14-mediated glioma invasion. Moreover, we characterized the correlation between SGEF expression and temozolomide (TMZ) resistance, and defined a role for SGEF in promoting the survival of glioma cells. SGEF mRNA and protein expression are regulated by the TWEAK-Fn14 signaling axis in an NF-κB-dependent manner, and inhibition of SGEF expression sensitizes glioma cells to TMZ treatment. Lastly, gene expression analysis of SGEF-depleted GB cells revealed altered expression of a network of DNA repair and survival genes. Thus, TWEAK-Fn14 signaling through GEF-Rho GTPase systems, including Ect2, Trio, and SGEF activation of Cdc42 and/or Rac1, presents a pathway of attractive drug targets in glioma therapy, and SGEF signaling represents a novel target in the setting of TMZ-refractory, invasive GB cells.

      Fegas, Rebecca K.; The University of Arizona College of Medicine - Phoenix; Driver, Jane (The University of Arizona., 2015-04-10)
      Background: The International Prognostic Scoring System (IPSS) for myelodysplastic syndrome (MDS) is commonly used to predict survival and assign treatment. We explored whether markers of frailty add prognostic information to the IPSS in a cohort of older patients. Design, Setting, Participants: Retrospective cohort study of 114 MDS patients ≥ age 65 who presented to Dana‐Farber Cancer Institute between 2006 and 2011 and completed a baseline quality‐of‐life questionnaire. Measurements: We evaluated questions corresponding to frailty and extracted clinical‐pathologic data from medical records. We used Kaplan‐Meier and Cox proportional hazards models to estimate survival. Results: 114 patients consented and were available for analysis. The median age was 72.5 years; the majority of patients were white (94.7%) and male (74.6%), and over half had a Charlson comorbidity score < 2. Few patients (23.7%) had an IPSS score consistent with low‐risk disease, and the majority received chemotherapy. In addition to traditional prognostic factors (IPSS score and history of prior chemotherapy or radiation), significant univariate predictors of survival included low serum albumin, Charlson score, the ability to take a long walk, and interference of physical symptoms in family life. The multivariate model that best predicted mortality included low serum albumin (HR = 2.3; 95% CI: 1.06‐5.14), previous chemotherapy or radiation (HR = 2.1; 95% CI: 1.16‐4.24), IPSS score (HR = 1.7; 95% CI: 1.14‐2.49), and ease of taking a long walk (HR = 0.44; 95% CI: 0.23‐0.90). Conclusions: In this study of older adults with MDS, we found that markers of nutritional status and self‐reported physical function added important prognostic information to the IPSS score. More comprehensive risk assessment tools for older patients with MDS, incorporating markers of function and frailty, are needed.
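The Kaplan‐Meier estimate used above is a product‐limit calculation that can be sketched in a few lines of plain Python. The follow‐up times and event flags below are invented placeholders, not study data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimator.

    times:  follow-up time for each patient
    events: 1 if death was observed, 0 if the patient was censored
    Returns a list of (time, survival probability) at each death time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    for t, event in data:
        if event:  # each death multiplies survival by (n - 1) / n
            survival *= (n_at_risk - 1) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= 1  # censored patients leave the risk set silently
    return curve

# Hypothetical follow-up times (months) and event indicators
times = [6, 12, 18, 24, 30]
events = [1, 1, 0, 1, 0]
print(kaplan_meier(times, events))
```

Censored patients (events = 0) reduce the number at risk without dropping the survival curve, which is what distinguishes this estimator from a naive fraction‐surviving calculation.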