Comparison of randomization-test procedures for single-case multiple-baseline designs
Affiliation: Univ Arizona, Dept Educ Psychol
Keywords: Single-case multiple-baseline designs; scientifically credible methodology
Publisher: Taylor & Francis Inc
Citation: Joel R. Levin, John M. Ferron & Boris S. Gafurov (2018) Comparison of randomization-test procedures for single-case multiple-baseline designs, Developmental Neurorehabilitation, 21:5, 290-311, DOI: 10.1080/17518423.2016.1197708
Rights: Copyright © 2018 Taylor & Francis
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at email@example.com.
Abstract: In three simulation investigations, we examined the statistical properties of several different randomization-test procedures for analyzing the data from single-case multiple-baseline intervention studies. Two procedures (Wampold-Worsham and Revusky) are associated with single fixed intervention start points and three are associated with randomly determined intervention start points. Of the latter three, one (Koehler-Levin) is an existing procedure that has been previously examined and the other two (modified Revusky and restricted Marascuilo-Busk) are modifications and extensions of existing procedures. All five procedures were found to maintain their Type I error probabilities at acceptable levels. In most of the conditions investigated here, two of the random start-point procedures (Koehler-Levin and restricted Marascuilo-Busk) were more powerful than the others with respect to detecting immediate abrupt intervention effects. For designs in which it is not possible to include the same series lengths for all cases, either the modified Revusky or restricted Marascuilo-Busk procedure is recommended.
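To make the random start-point idea concrete, the following is a minimal illustrative sketch (not the authors' implementation, and not any of the five named procedures in their exact form) of a generic randomization test for a multiple-baseline design: each case has a set of admissible intervention start points, the test statistic is the across-case mean of the post-minus-pre phase-mean difference, and the p-value is the proportion of all start-point assignments whose statistic is at least as large as the observed one. Function names and the toy data are assumptions for illustration only.

```python
import itertools


def effect(series, start):
    """Phase-mean difference for one case: mean(intervention) - mean(baseline),
    where the intervention phase begins at index `start`."""
    pre, post = series[:start], series[start:]
    return sum(post) / len(post) - sum(pre) / len(pre)


def randomization_test(data, starts, candidates):
    """Generic random start-point randomization test (illustrative sketch).

    data: list of per-case observation series
    starts: actual intervention start index used for each case
    candidates: list of admissible start indices for each case

    Returns the one-sided p-value: the proportion of all possible
    start-point assignments whose averaged effect is >= the observed one.
    """
    observed = sum(effect(s, st) for s, st in zip(data, starts)) / len(data)
    combos = list(itertools.product(*candidates))  # all start-point assignments
    count = sum(
        1
        for combo in combos
        if sum(effect(s, st) for s, st in zip(data, combo)) / len(data)
        >= observed
    )
    return count / len(combos)


# Fabricated toy data: two cases with immediate abrupt level shifts.
data = [
    [0, 0, 0, 0, 5, 5, 5],      # case 1: intervention began at index 4
    [1, 1, 1, 1, 1, 6, 6, 6],   # case 2: intervention began at index 5
]
starts = [4, 5]
candidates = [[3, 4, 5], [4, 5, 6]]  # admissible start points per case

p = randomization_test(data, starts, candidates)
print(p)  # 1 of the 9 assignments matches or exceeds the observed effect
```

With three admissible start points per case and two cases, the randomization distribution has only 3 x 3 = 9 points, so the smallest attainable p-value is 1/9; real designs need more cases or more admissible start points per case to reach conventional significance levels.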
Note: 12-month embargo; published online: 01 July 2016
Version: Final accepted manuscript