A COMPARISON OF THE PERFORMANCES OF REEVALUATED AND NEWLY REFERRED LEARNING-DISABLED STUDENTS AND NEWLY REFERRED NON-LEARNING-DISABLED STUDENTS ON THE WECHSLER INTELLIGENCE SCALES FOR CHILDREN-REVISED AND THE WOODCOCK-JOHNSON TESTS OF COGNITIVE ABILITY.
Author: CONROY, DAVID S.
Keywords: Wechsler Intelligence Scale for Children.
Woodcock-Johnson Tests of Cognitive Ability.
Learning disabled children.
Committee Chair: Sabers, Darrell
Publisher: The University of Arizona.
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract: There has been much controversy concerning the comparability of the Wechsler Intelligence Scales for Children-Revised (WISC-R) and the Woodcock-Johnson Tests of Cognitive Ability (WJTCA). Previous research has raised the issue of a mean score discrepancy between the tests when used with the learning disabled. This study analyzed and compared performances on these two tests by reevaluated and newly referred LD students and newly referred non-LD students. In addition, subtypes of LD students were formed on the basis of achievement test scores. These students' test performances were also analyzed and compared. The results of this study were consistent with previous research. The Full Scale scores from the two tests were highly correlated in all three groups, but the WISC-R scores were significantly higher than the WJTCA scores for each group. Across the identified LD subtypes there was a significant difference between the Full Scale scores from the two tests. However, meaningful patterns of strengths and weaknesses across aspects of cognitive functioning were not uncovered. These results indicate that the WISC-R and WJTCA yield significantly different estimates of the cognitive ability of LD and referred students. This difference can be attributed to a combination of three possible explanations: the use of non-random samples, the use of different norm groups when the tests were standardized, and differences in test content.
Degree Program: Educational Foundations and Administration