Item response theory evaluation of the Light and Spectroscopy Concept Inventory national data set
Affiliation: Univ Arizona, Steward Observ, Dept Astron
Publisher: American Physical Society
Citation: Wallace, C. S., Chambers, T. G., & Prather, E. E. (2018). Item response theory evaluation of the Light and Spectroscopy Concept Inventory national data set. Physical Review Physics Education Research, 14(1), 010149. https://doi.org/10.1103/PhysRevPhysEducRes.14.010149
Rights: Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license.
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at email@example.com.
Abstract: [This paper is part of the Focused Collection on Astronomy Education Research.] This paper presents the first item response theory (IRT) analysis of the national data set on introductory, general education, college-level astronomy teaching using the Light and Spectroscopy Concept Inventory (LSCI). We used the difference between students' pre- and postinstruction IRT-estimated abilities as a measure of learning gain. This analysis provides deeper insights than prior publications both into the LSCI as an instrument and into the effectiveness of teaching and learning in introductory astronomy courses. Our IRT analysis supports the classical test theory findings of prior studies using the LSCI with this population. In particular, we found that students in classes that used active learning strategies at least 25% of the time had average IRT-estimated learning gains that were approximately 1 logit larger than students in classes that spent less time on active learning strategies. We also found that instructors who want their classes to achieve an improvement in abilities of average Δθ = 1 logit must spend at least 25% of class time on active learning strategies. However, our analysis also powerfully illustrates how little insight into student learning is revealed by a single measure of learning gain, such as average Δθ. Educators and researchers should also examine the distributions of students' abilities pre- and postinstruction in order to understand how many students actually achieved an improvement in their abilities and whether or not a majority of students have moved to postabilities significantly greater than the national average.
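The abstract's caution about single summary measures can be illustrated with a short sketch: given pre- and postinstruction IRT-estimated abilities in logits, compute each student's gain Δθ = θ_post − θ_pre and then look beyond the class average at how many students actually improved. The ability values below are hypothetical and purely illustrative, not data from the study.

```python
import statistics

# Hypothetical IRT-estimated abilities (in logits) for a small class;
# these numbers are illustrative, not data from the LSCI study.
pre_theta = [-1.2, -0.8, -0.5, 0.0, 0.3, -1.0, 0.4, -0.2]
post_theta = [0.1, -0.9, 0.8, 1.2, 0.4, 0.5, 1.6, 0.9]

# Per-student learning gain: delta_theta = theta_post - theta_pre
delta_theta = [post - pre for pre, post in zip(pre_theta, post_theta)]

# The class-average gain is one number...
avg_gain = statistics.mean(delta_theta)
print(f"average delta theta = {avg_gain:.2f} logits")

# ...but the distribution shows how many students actually improved,
# which the mean alone can hide.
improved = sum(1 for d in delta_theta if d > 0)
print(f"{improved} of {len(delta_theta)} students improved")
```

A fuller treatment would also compare each student's postability to a reference value (such as a national average) to see how many students end up above it, which is the second check the abstract recommends.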
Note: Open access journal.
Version: Final published version
Sponsors: National Science Foundation