Measurement characteristics of a concept classification exam using multiple case examples: A Rasch analysis
Affiliation: Univ Arizona, Coll Publ Hlth, Div Community Environm & Policy; Univ Arizona, Coll Pharm, Dept Pharm Practice & Sci
Publisher: Elsevier Science Inc.
Citation: Jennings, N. B., Slack, M. K., Mollon, L. E., & Warholak, T. L. (2016). Measurement characteristics of a concept classification exam using multiple case examples: A Rasch analysis. Currents in Pharmacy Teaching and Learning, 8(1), 31-38.
Rights: © 2015 Elsevier Inc. All rights reserved.
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at email@example.com.
Abstract: Objective: To determine whether an exam using multiple cases to test research design concepts measured only one cognitive skill, concept classification, and whether item difficulty varied according to the research design used for the case. Methods: The exam consisted of 50 multiple-choice items associated with five example abstracts, one each for a randomized controlled trial, a pretest-posttest design, a crossover design, a retrospective cohort design, and a descriptive design. A Rasch analysis was conducted to assess dimensionality (i.e., whether the exam measured a single skill). Items were stratified by design to explore the relationship between item difficulty and study design. Overall difficulty was assessed using an item-person map. Results: The exam was administered to 101 students; the mean score was 88.4% (44.2 of 50 points; SD = 3.5). The Rasch analysis indicated that the exam primarily measured one cognitive skill, presumably concept classification. The stratified analysis indicated that, overall, no single research design was more difficult than the others; however, the type of research design and item topic interacted, so that an item that was easy for one design could be difficult when associated with a different design. Conclusions: The exam functioned more like a mastery exam, documenting that most students performed well, than as an exam for ranking students by ability. The finding that item topic interacted with study design to affect item difficulty indicates that items on the same topic are needed to test basic design concepts across study designs.
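To illustrate the kind of item-difficulty estimate underlying a Rasch analysis, the sketch below computes crude Rasch-style difficulties as the logit of each item's failure rate. The response matrix, function name, and values are illustrative assumptions, not data from the study; a full Rasch analysis (joint estimation of person abilities and item difficulties, fit statistics, item-person maps) would use specialized software.

```python
import math

# Hypothetical response matrix: rows = students, columns = exam items
# (1 = correct, 0 = incorrect). The real exam was 101 students x 50 items.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
]

def item_difficulties(matrix):
    """Crude Rasch-style difficulty estimates: the logit of the
    proportion of incorrect responses per item (higher = harder)."""
    n = len(matrix)
    diffs = []
    for j in range(len(matrix[0])):
        p = sum(row[j] for row in matrix) / n  # proportion correct
        p = min(max(p, 1 / (2 * n)), 1 - 1 / (2 * n))  # keep logits finite
        diffs.append(math.log((1 - p) / p))
    return diffs

print(item_difficulties(responses))
```

Items most students answer correctly receive large negative difficulties, and rarely answered items receive large positive ones, which is how an item-person map orders items against the ability distribution.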
Note: 12-month embargo; available online 6 November 2015.
Version: Final accepted manuscript