Show simple item record

dc.contributor.author: Levin, Joel R.
dc.contributor.author: Ferron, John M.
dc.contributor.author: Gafurov, Boris S.
dc.date.accessioned: 2017-11-02T23:51:28Z
dc.date.available: 2017-11-02T23:51:28Z
dc.date.issued: 2017-08
dc.identifier.citation: Additional comparisons of randomization-test procedures for single-case multiple-baseline designs: Alternative effect types. Journal of School Psychology, 2017, 63:13
dc.identifier.issn: 0022-4405
dc.identifier.doi: 10.1016/j.jsp.2017.02.003
dc.identifier.uri: http://hdl.handle.net/10150/625957
dc.description.abstract: A number of randomization statistical procedures have been developed to analyze the results from single-case multiple-baseline intervention investigations. In a previous simulation study, comparisons of the various procedures revealed distinct differences among them in their ability to detect immediate abrupt intervention effects of moderate size, with some procedures (typically those with randomized intervention start points) exhibiting power that was both respectable and superior to other procedures (typically those with single fixed intervention start points). In Investigation 1 of the present follow-up simulation study, we found that when the same randomization-test procedures were applied to either delayed abrupt or immediate gradual intervention effects: (1) the powers of all of the procedures were severely diminished; and (2) in contrast to the previous study's results, the single fixed intervention start-point procedures generally outperformed those with randomized intervention start points. In Investigation 2 we additionally demonstrated that if researchers are able to successfully anticipate the specific alternative effect types, it is possible for them to formulate adjusted versions of the original randomization-test procedures that can recapture substantial proportions of the lost powers.
dc.language.iso: en
dc.publisher: PERGAMON-ELSEVIER SCIENCE LTD
dc.relation.url: http://linkinghub.elsevier.com/retrieve/pii/S0022440517300171
dc.rights: © 2017 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Single-case intervention research
dc.subject: Multiple-baseline design
dc.subject: Randomization statistical tests
dc.subject: Alternative effect types
dc.title: Additional comparisons of randomization-test procedures for single-case multiple-baseline designs: Alternative effect types
dc.type: Article
dc.contributor.department: University of Arizona
dc.identifier.journal: Journal of School Psychology
dc.description.note: 24 month embargo; Available online 9 March 2017.
dc.description.collectioninformation: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
dc.eprint.version: Final accepted manuscript

Files in this item

Name: Levin_randomization_final.pdf
Size: 996.1 KB
Format: PDF
Description: Final Accepted Manuscript
