
    • Vagueness and Borderline Cases

      Horgan, Terry; Lavine, Shaughan; Daly, Helen; Sartorio, Carolina (The University of Arizona., 2011)
      Vagueness is ubiquitous in natural language. It seems incompatible with classical, bivalent logic, which tells us that every statement is either true or false, and none is vaguely true. Yet we do manage to reason using vague natural language. In fact, the majority of our day-to-day reasoning involves vague terms and concepts. There is a puzzle here: how do we perform this remarkable feat of reasoning? I argue that vagueness is a kind of semantic indecision. In short, that means we cannot say exactly who is bald and who is not because we have never decided the precise meaning of the word 'bald'--there are some borderline cases in the middle, which might be bald or might not. That is a popular general strategy for addressing vagueness. Those who use it, however, do not often say what they mean by 'borderline case'. It is most frequently used in a loose way to refer to in-between items: those people who are neither clearly bald nor clearly not bald. But under that loose description, the notion of borderline cases is ambiguous, and some of its possible meanings create serious problems for semantic theories of vagueness. Here, I clarify the notion of a borderline case, so that borderline cases can be used profitably as a key element in a successful theory of vagueness. After carefully developing my account of borderline cases, I demonstrate its usefulness by proposing a theory of vagueness based upon it. My theory, vagueness as permission, explains how classical logic can be used to model even vague natural language.
    • Valid-time indeterminacy.

      Dyreson, Curtis Elliott.; Snodgrass, Richard T.; Downey, Peter J.; Peterson, Larry (The University of Arizona., 1994)
      In valid-time indeterminacy, it is known that an event stored in a temporal database did in fact occur, but it is not known exactly when the event occurred. We extend a tuple-timestamped temporal data model to support valid-time indeterminacy and outline its implementation. This work is novel in that previous research, although quite extensive, has not studied this particular kind of incomplete information. To model the occurrence time of an event, we introduce a new data type called an indeterminate instant. Our thesis is that by representing an indeterminate instant with a set of contiguous chronons and a probability distribution over that set, it is possible to characterize a large number of (possibly weighted) alternatives, to devise intuitive query language constructs, including schema specification, temporal constants, temporal predicates and constructors, and aggregates, and to implement these constructs efficiently. We extend the TQuel and TSQL2 query languages with constructs to retrieve information in the presence of indeterminacy. Although the extended data model and query language provide needed modeling capabilities, these extensions appear to carry a significant execution cost. The cost of support for indeterminacy is empirically measured, and is shown to be modest. We then show how indeterminacy can provide a much richer modeling of granularity and now. Granularity is the unit of measure of a temporal datum (e.g., days, months, weeks). Indeterminacy and granularity are two sides of the same coin insofar as a time at a given granularity is indeterminate at all finer granularities. Now is a distinguished temporal value. We describe a new kind of instant, a now-relative indeterminate instant, which has the same storage requirements as other instants, but can be used to model situations such as that an employee is currently employed but will not work beyond the year 1995. 
In summary, support for indeterminacy dramatically increases the modeling capabilities of a temporal database without adversely impacting performance.
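
The core representation described in this abstract, an indeterminate instant as a set of contiguous chronons with a probability distribution over that set, can be sketched as follows. This is a minimal illustration only; the class and method names are not the actual TQuel/TSQL2 constructs.

```python
# A sketch of the indeterminate-instant representation described above: a set
# of contiguous chronons plus a probability distribution over that set. The
# class and method names are illustrative; they are not TQuel/TSQL2 constructs.

class IndeterminateInstant:
    def __init__(self, first_chronon, last_chronon, weights=None):
        self.chronons = list(range(first_chronon, last_chronon + 1))
        n = len(self.chronons)
        if weights is None:
            # Default distribution when nothing more is known: uniform.
            self.pmf = {c: 1.0 / n for c in self.chronons}
        else:
            total = float(sum(weights))
            self.pmf = {c: w / total for c, w in zip(self.chronons, weights)}

    def prob_before(self, chronon):
        """Probability that the event occurred strictly before `chronon`."""
        return sum(p for c, p in self.pmf.items() if c < chronon)

# An event known to have occurred some time during chronons 10..13:
e = IndeterminateInstant(10, 13)
print(e.prob_before(12))  # 0.5 under the uniform distribution
```

Weighted alternatives, as mentioned in the abstract, fall out of the same structure: passing non-uniform `weights` skews the distribution, and temporal predicates can then be answered probabilistically from the stored mass function.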
    • Validating a Neonatal Risk Index to Predict Necrotizing Enterocolitis

      Effken, Judith A.; Gephart, Sheila Maria; Reed, Pamela G.; Jones, Elaine G.; Halpern, Melissa D. (The University of Arizona., 2012)
      Necrotizing enterocolitis (NEC) is a costly and deadly disease in neonates. Composite risk for NEC is poorly understood and consensus has not been established on the relevance of risk factors. This two-phase study attempted to validate and test a neonatal NEC risk index, GutCheck(NEC). Phase I used an E-Delphi methodology in which experts (n=35) rated the relevance of 64 potential NEC risk factors. Items were retained if they achieved predefined levels of expert consensus or stability. After three rounds, 43 items were retained (CVI=.77). Qualitative analysis revealed two broad themes: individual characteristics of vulnerability and the impact of contextual variation within the NICU on NEC risk. In Phase II, the predictive validity of GutCheck(NEC) was evaluated using a sample from the Pediatrix BabySteps Clinical Data Warehouse (CDW). The sample included infants born <1500 grams, before 36 weeks, and without congenital anomalies or spontaneous intestinal perforation (N=58,818, of which n=35,005 for empiric derivation and n=23,813 for empiric validation). Backward stepwise likelihood-ratio method regression was used to reduce the number of predictive factors in GutCheck(NEC) to 11 and derive empiric weights. Items in the final GutCheck(NEC) were gestational age, history of a transfusion, NICU-specific NEC risk, late onset sepsis, multiple infections, hypotension treated with inotropic medications, Black or Hispanic race, outborn status, metabolic acidosis, human milk feeding on both day 7 and day 14 (reduces risk) and probiotics (reduces risk). Discrimination was fair in the case-control sample (AUC=.67, 95% CI .61-.73) but better in the validation set (AUC=.76, 95% CI .75-.78) and best for surgical NEC (AUC=.84, 95% CI .82-.84) and infants who died from NEC (AUC=.83, 95% CI .81-.85). A GutCheck(NEC) score of 33 (range 0-58) yielded a sensitivity of .78 and a specificity of .74 in the validation set. 
Intra-individual reliability was acceptable (ICC(19) = .97, p < .001). Future research is needed to repeat this procedure in infants between 1500 and 2500 grams, complete psychometric testing, and explore unit variation in NEC rates using a comprehensive approach.
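
Scoring a weighted additive risk index against a published cutoff, as described for GutCheck(NEC), can be sketched as below. The abstract reports the 11 items, the score range (0-58), and the cutoff (33), but not the empirically derived weights: every weight in this sketch is a hypothetical placeholder.

```python
# Sketch of scoring a weighted additive risk index like GutCheck(NEC).
# The item names follow the abstract; the weights are HYPOTHETICAL
# placeholders chosen only so positive points sum to the reported
# maximum of 58. They are not the empirically derived weights.

HYPOTHETICAL_WEIGHTS = {
    "gestational_age_points": 14,      # placeholder value
    "transfusion_history": 8,
    "nicu_specific_nec_risk": 8,
    "late_onset_sepsis": 6,
    "multiple_infections": 6,
    "treated_hypotension": 6,
    "black_or_hispanic_race": 3,
    "outborn_status": 3,
    "metabolic_acidosis": 4,
    "human_milk_day7_and_day14": -8,   # protective: reduces risk
    "probiotics": -4,                  # protective: reduces risk
}

def gutcheck_score(present_factors):
    """Sum the weights of the factors present, floored at zero."""
    return max(0, sum(HYPOTHETICAL_WEIGHTS[f] for f in present_factors))

def high_risk(score, cutoff=33):
    # At a cutoff of 33 the abstract reports sensitivity .78 / specificity .74.
    return score >= cutoff

s = gutcheck_score(["gestational_age_points", "transfusion_history",
                    "nicu_specific_nec_risk", "late_onset_sepsis"])
print(s, high_risk(s))  # 36 True
```

The cutoff trade-off is the usual one: raising it improves specificity at the cost of sensitivity, which is why the abstract reports both at the chosen score of 33.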
    • Validating and Testing A Model to Predict Adoption of Electronic Personal Health Record Systems in the Self-Management of Chronic Illness in the Older Adult

      Effken, Judith A.; Logue, Melanie D.; Reed, Pamela G.; Murdaugh, Carolyn (The University of Arizona., 2011)
      Problem statement: As a result of the aging population, the number of people living with chronic disease has increased to almost 50% (CDC, 2004). Two of the main goals in treating patients with chronic diseases are to provide seamless care from setting to setting and prevent disability in the older adult. Many have proposed the use of electronic personal health record systems (PHRs) in the self-management process, but adoption remains low. The purpose of this research was to validate and test an explanatory model of the barriers and facilitators to older adults' adoption of personal health records for self-managing chronic illnesses. The long range goal of the research is to use the explanatory model to develop interventions that will maximize the facilitators and minimize the barriers to adoption. Methods: A preliminary attempt to capture the essential barriers and facilitators that predict adoption of PHRs among older adults with chronic illness was synthesized from the literature. In Phase One of the study, the model was integrated from existing literature and validated using a Delphi method. In Phase Two of the study, the model was pilot tested and refined for future investigations. Findings: The results of this study validated the Personal Health Records Adoption Model (PHRAM) and a preliminary instrument that measured barriers and facilitators to the adoption of PHRs in older adults who are self-managing chronic illness. Additional findings indicate that while seniors are seeking options to manage their health and have expressed an interest in using Internet-based PHRs, they may require assistance to gain access to PHRs. Implications: The potential for PHRs to increase patient autonomy and reduce disability and the resulting negative health consequences needs further investigation as we move into the next era of healthcare delivery. The results of this study provided the foundation for continued theoretically-based research in this area.

      Bergan, John R.; Lane, Suzanne; Sabers, Darrell; Nicholson, Glen; Mishra, Shitala P. (The University of Arizona., 1986)
      The present study was a systematic investigation of hierarchical skill sequences in the beginning reading domain. The hierarchies included skills from the traditional approach to reading which reflect bottom-up processing and skills from the conceptual area of print awareness which reflect top-down processing. Researchers supporting the bottom-up approach view reading as a process in which the child extracts information from the text to gain knowledge of the print. The bottom-up processes examined were in the areas of letter recognition and letter naming, and identification of letter sounds and phonemes. The top-down processing approach views reading as a task in which the child brings his/her past experiences and knowledge about the world to gain information about print. The top-down processes examined were in the areas of print identification, inferring a word in context, and print directionality rules. Hierarchical skill sequences were developed within each of the specific areas reflecting the top-down and bottom-up processing theories. Items were developed to reflect the skill sequences based on the cognitive processes that are necessary for correct performance. This involved varying the task demands imposing various requirements on cognitive processing. The data were from 13,189 Head Start children ranging from 3 to 6 years of age. Latent trait models were constructed to reflect the hypothesized skill sequences by allowing the aj (discrimination) and bj (difficulty) parameters to be free to vary or by constraining them to be equal to other parameters. To arrive at a preferred model, each latent trait model that represented a hypothesized skill sequence was statistically compared against alternative latent trait models. The results from the present investigation supported the hierarchical skill sequences reflecting skills within the traditional area of reading. 
However, some of the skill sequences from the conceptual area of print awareness were not clearly supported. While the results provide a deeper understanding of beginning reading skill sequences reflecting top-down and bottom-up processing theories, future research is needed to delineate the specific skills which promote later reading ability once the child is in formal reading instruction.
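
The latent trait models in this and the following abstracts are built on item response functions with discrimination (aj) and difficulty (bj) parameters. A minimal sketch of the two-parameter form, and of the difficulty ordering a skill hierarchy predicts, is given below; the parameter values are illustrative only.

```python
import math

# Minimal sketch of the two-parameter latent trait (2PL) item response
# function underlying these analyses: a_j is the item discrimination and
# b_j the item difficulty. All parameter values here are illustrative.

def p_correct(theta, a_j, b_j):
    """Probability that a child at ability theta passes item j."""
    return 1.0 / (1.0 + math.exp(-a_j * (theta - b_j)))

# A hierarchical skill sequence predicts ordered difficulties b_1 < b_2:
# at any fixed ability, the earlier skill is passed more often than the later.
theta = 0.0
p_earlier = p_correct(theta, a_j=1.0, b_j=-1.0)
p_later = p_correct(theta, a_j=1.0, b_j=1.0)
print(p_earlier > p_later)  # True
```

As the abstracts describe, a hypothesized sequence is then tested by constraining aj or bj to be equal across items, re-fitting, and statistically comparing the constrained model against the freely estimated alternative.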
    • Validating cognitive skill sequences in the early social development domain using path-referenced technology and latent trait models.

      Bergan, John R.; Feld, Jason Kane.; Morris, Richard J.; Mishra, Shitala P. (The University of Arizona., 1988)
      The present study was a systematic investigation of hierarchical skill sequences in the early social development domain. Recent research has suggested that social development may be conceptualized as a phenomenon involving a hierarchical sequencing of competencies. In particular, social development may involve sequential changes in capability, reflecting successively higher levels of functioning within these competencies. The conceptual problem of this study focused on the construction and validation of a meaningful representation of ability in early social development. Ability was conceptualized as a composite of cognitive procedures governing the performance of specific tasks. The process for constructing skill sequences to reflect ability involved identifying task characteristics or demands which imposed various requirements on cognitive functioning. Hierarchical skill sequences were constructed to tap a variety of capabilities within the early social development domain. These skill sequences included understanding emotions, identifying and mediating needs, understanding friendships, and understanding fairness in decision making. Assessment items were developed to reflect each of these skill sequences based on the cognitive processes that are necessary for correct performance. This involved varying the task demands imposing various requirements on cognitive processing. The data were from 18,305 Head Start children ranging from 30 to 83 months of age. Latent trait models were constructed to reflect the hypothesized skill sequences by allowing the discrimination and difficulty parameters to be free to vary or by constraining them to be equal to other parameters. To arrive at a preferred model, each latent trait model was statistically compared against alternative latent trait models. 
In general, the results from the present investigation supported the hypothesis that the acquisition of social skills is a developmental phenomenon involving a hierarchical sequencing of competencies. Moreover, the study supports the assumption that changes in capability can be defined by progress toward abstraction, complexity, stability, and the handling of increasing quantities of information. While the results provide a deeper understanding of early social development, future research is needed to extend the developmental structure to higher levels of ability. Moreover, research is needed to determine how the information gleaned from developmental assessment can be utilized in planning learning experiences to enhance development.
    • Validating hierarchical sequences in the design copying domain using latent trait models.

      Bergan, John R.; Burch, Melissa Price.; Mishra, Shitala P.; Obrzut, John E. (The University of Arizona., 1988)
      The present study was a systematic investigation of hierarchical skill sequences in the design copying domain. The factors associated with possible variations in task difficulty were delineated. Five hierarchies were developed to reflect variations in rule usage, the structuring of responses, presence of angles, spatial orientations, and stimulus complexity. Three hundred thirty-four subjects aged five through ten years were administered a 25-item design copying test. The data were analyzed using probabilistic models. Latent trait models were developed to test the hypothesized skill sequences. Each latent trait model was statistically compared to alternate models to arrive at a preferred model that would adequately represent the data. Results suggested that items with predictable difficulty levels can be developed in this domain based on an analysis of stimulus dimensions and the use of rules for task completion. The inclusion of visual cues to guide design copying assists accurate task completion. Implications of the current findings for facilitating the construction of tests which accurately provide information about children's skill levels were discussed. The presence of hierarchical skill sequences in a variety of ability domains was supported.
    • Validating the development of male and female preschoolers' help-seeking, goal-setting and planning, and self-evaluation using latent trait models.

      Reddy, Linda Ann.; Bergan, John R.; Mishra, Shitala P.; Feld, Jason K. (The University of Arizona., 1994)
      The present study investigated the early development of three self-regulated learning strategies--help seeking, goal setting and planning, and self evaluation--for male and female preschoolers. Skill sequences were developed by identifying demand attributes that imposed requirements on cognitive functioning. The demand attributes of adult assistance and task complexity were identified for all three learning strategies. Variations in adult assistance and task complexity were examined to determine the relative difficulty for male and female preschoolers to perform skills within each learning strategy. This study included data from 10,291 preschoolers, age 2 to 6 years, from Head Start and public preschool programs across the country. The sample included approximately 5,000 males and 5,000 females from culturally diverse backgrounds. Children were assessed by their preschool teachers over two months with a standardized observational assessment instrument. A variety of latent trait models were used to test the developmental skill sequences of these learning strategies in relation to gender. Results revealed that variations in adult assistance and task complexity were related to the relative difficulty in performing these learning strategies. These findings support the notion that adult assistance can enhance the development of preschoolers' self-regulated learning strategies. In particular, adult assistance promotes preschoolers' skills to perform simple functions independently and complex functions (e.g., advance planning or checking in parts) with adult help. Gender differences were found in preschoolers' difficulties in self-evaluating and seeking help. For example, females had more difficulty than males checking completed work with adult help and checking an activity in parts with adult help. Males had less difficulty checking a completed activity independently than females. 
Results also suggested that males are more sensitive to the presence of adult assistance when performing complex checking (i.e., checking in parts) than females. In addition, females were found to be more skilled than males in seeking assistance from adults in the classroom. No gender differences were found in goal setting and planning. The results from this study support the importance of social influences on preschoolers' development of self-regulated learning strategies. Future research directions and implications were also addressed.
    • Validation of a Mass Casualty Model

      Effken, Judith; Culley, Joan Marie; Verran, Joyce (The University of Arizona., 2007)
      There is a paucity of literature evaluating mass casualty systems and no clear 'gold standard' for measuring the efficacy of information decision support systems or triage systems that can be used in mass casualty events. The purpose of this research was the preliminary validation of a comprehensive conceptual model for a mass casualty continuum of care. This research examined key relationships among entities/factors needed to provide real-time visibility of data that track patients, personnel, resources and potential hazards that influence outcomes of care during mass casualty events. A modified Delphi technique was used to validate the proposed model using a panel of experts. The four research questions measured the extent to which experts agreed that: 1) the ten constructs represent appropriate predictors of outcomes of care during mass casualty events; 2) the proposed relationships among the constructs provide valid representations of mass casualty triage; 3) the proposed indicators for each construct represent appropriate measurements for the constructs; and 4) the proposed model is seen as useful to the further study of information and technology requirements during mass casualty events. The usefulness of the online Delphi process was also evaluated. A purposeful sample of 18 experts who work in the field of emergency preparedness/response was selected from across the United States. Computer, Internet and email applications were used to facilitate a modified Delphi technique through which experts provided initial validation for the proposed conceptual model. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships and indicators in the model. Experts viewed the proposed model as relatively useful (Mean = 5.3 on a 7-point scale). Experts rated the online Delphi process favorably. Constructs, relationships and indicators presented in this model are viewed as preliminary. 
Future research is needed to develop the tools to measure the constructs and then test the model as a framework for studying effects and outcomes of mass casualty events. This study provides a foundation for understanding the complex context in which mass casualty events take place and the factors that influence outcomes of care.
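
Several of the studies in this listing retain or accept items when expert ratings reach consensus or remain stable across Delphi rounds. A sketch of such a retention rule follows; the 1-4 rating scale, the 75% consensus threshold, and the 10% stability band are illustrative assumptions, not values taken from any one of these studies.

```python
# Sketch of a Delphi item-retention rule: keep an item when expert ratings
# reach a consensus threshold in the latest round, or when the panel's view
# is stable across rounds. The rating scale and thresholds are illustrative
# assumptions only.

def proportion_relevant(ratings, relevant=3):
    """Proportion of experts rating the item at or above `relevant` (1-4 scale)."""
    return sum(r >= relevant for r in ratings) / len(ratings)

def retain_item(round1_ratings, round2_ratings,
                consensus=0.75, stability_band=0.10):
    p2 = proportion_relevant(round2_ratings)
    if p2 >= consensus:
        return True  # consensus reached in the latest round
    p1 = proportion_relevant(round1_ratings)
    # Stability: the proportion of "relevant" ratings barely changed.
    return abs(p2 - p1) <= stability_band

# Five experts rate an item in two successive rounds:
print(retain_item([3, 3, 2, 2, 2], [4, 4, 3, 3, 2]))  # True (4/5 consensus)
```

Iterating this rule round by round, and stopping when every surviving item meets one criterion or the other, is what lets these studies report results such as "two rounds were needed to satisfy the criteria for consensus and/or stability."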

      Van Allen, Roach; Shaeffer, Ann Marilyn Rufer (The University of Arizona., 1981)
      This study was designed to develop and validate an assessment instrument which would yield valid information on teachers' theoretical learning philosophy orientation and instructional behaviors in the teaching of writing. Data are analyzed to determine whether there is a relationship between stated learning theories and responses to statements of elements of a writing program. The subjects who participated in the study were graduate students at Oakland University, Rochester, Michigan, and The University of Arizona, Tucson, Arizona, and experts in the field of writing or language arts who were certified according to stated criteria. Respondents completed the View Toward Learning sheet and the Shaeffer Inventory of Approaches to the Teaching of Writing. The information from each completed Inventory and Learning View sheet was recorded and analyzed to accept or reject ten hypotheses. The Inventory achieved content validity through individual item documentation in literature sources. The instructional approaches were interpreted according to three common learning theories: Behavioristic, Nativistic and Cognitive Field. The data analysis, which included t tests, Cronbach alphas, and item correlations and classification, established the instrument as valid in distinguishing a teacher's approach as Behavioristic or Nativistic and reliably aligned learning theory with classroom practices. It was not valid in differentiating the Nativist from the Cognitivist. Recommendations include a revision of selected Nativistic and Cognitive Field items to achieve a clearer distinction between the two approaches, and the use of the instrument and cover sheet in a large scale study to further document validity and reliability. The Inventory may be utilized for teacher self-appraisal individually, in staff development projects, or in combination with classroom observation. 
Teacher education programs concerned with writing instruction could assess beliefs about the way children learn and related classroom practices.
    • Validation of bioimpedance spectroscopy to assess acute changes in hydration status

      Howell, Wanda H.; Higgins, Karen J. (The University of Arizona., 2004)
      In this study bioimpedance spectroscopy (BIS) was validated as a field method for measuring short-term, small changes in hydration status by comparing extracellular water change (ΔECW) estimated by BIS with a criterion method (bromide dilution), dual energy X-ray absorptiometry (DXA) and body weight (BW). A secondary aim was to compare BW to bromide dilution as a method for estimating acute ΔECW. Finally, BIS was compared to DXA and single frequency bioimpedance analysis (SF-BIA) instruments to assess acute hydration effects on body composition estimates. During dehydration, no significant differences were found between bromide and BIS measures of ΔECW. The ΔECW measured by DXA (DXA-ΔECW) and BW (BW-ΔECW) was significantly different from bromide-estimated ΔECW (Br-ΔECW), but not from BIS estimates (BIS-ΔECW). During rehydration, there were no significant differences between Br-ΔECW and the other methods. When using BW as the reference, results were more consistent in that BW-ΔECW was significantly correlated with both BIS-ΔECW and DXA-ΔECW regardless of hydration status. These findings suggest that bromide may not be an appropriate criterion method for estimating short-term changes in hydration status. Regardless of hydration status, BIS provided accurate measures of fat-free mass (BIS-FFM) and fat mass (BIS-FM) that were comparable to, or better than, estimates by SF-BIA. At baseline and after dehydration BIS-FFM had the highest correlation with DXA estimates (DXA-FFM), although two SF-BIA instruments (Bio-Resistance Body Composition Analyzer from Valhalla Scientific and The Body Comp Scale from American Weights & Measures) produced good estimates of FFM. Rehydration appeared to affect the accuracy of FFM measurements by BIS and SF-BIA as evidenced by lower, more moderate correlations to DXA-FFM. Phase-dependent effects on percentage body fat (%BF) estimates were also apparent. 
In contrast, all methods performed reasonably well for estimates of FM, regardless of hydration status. In summary, BIS provides accurate estimates of ΔECW compared to either bromide dilution or BW, especially in the direction of dehydration. BIS also provides accurate estimates of FFM and FM regardless of hydration status. Further study of bromide dilution as a criterion measure is needed to validate its use in measuring ΔECW during acute shifts in hydration.
    • Validation of Hearing Aid Fittings by the Arizona Sonora Borders (ARSOBO) Projects for Inclusion

      Dean, James; Beukelman, Page Naomi; Velenovsky, David; Marrone, Nicole; Griffin, Stephanie (The University of Arizona., 2018)
      The Arizona Sonora Border (ARSOBO) Projects for Inclusion Hearing Healthcare clinic provides comprehensive audiologic evaluations and low-cost hearing aids to individuals in Nogales, Sonora, Mexico. After identifying the need for a fitting guide to properly adjust the hearing aids, we collected 110 patient audiograms and grouped them into the six most common configurations of hearing loss. Using simulated real-ear measures, we fit the hearing aids to each of the six common configurations of hearing loss and recorded the appropriate settings to serve as a starting point for future hearing aid fittings. In an effort to determine the success of these hearing aid fittings (and others performed by the ARSOBO Hearing Healthcare program), we administered 29 questionnaires assessing hearing aid effectiveness. The International Outcome Inventory for Hearing Aids (IOI-HA) was the chosen outcome measure due to its international applicability and quick and simple format. The hearing health care provided by ARSOBO yielded positive outcomes overall and favorable ratings in each category of the IOI-HA. Potential confounding variables, limitations and future directions of the program are outlined. Additionally, the specific results of these outcome measures and the implications of our project/fitting guide for humanitarian audiology are discussed.
    • Validation of Reconstructed Program Theory

      Taren, Douglas L.; Foltysova, Jirina; Garcia, Francisco A.; Tidd, John M.; Cutshaw, Christina A.; Ehiri, John E. (The University of Arizona., 2013)
      Background: The focus of this dissertation is on methods associated with evaluating a program's merit and worth. There are many approaches documented in the literature for evaluating merit and worth; the focus here is only on theory-driven evaluation (TDE). The premise of TDE is that the program theory (PT) must be understood before being able to evaluate the merit and worth of a program. One of the early limitations in the TDE literature was a lack of methods for deriving PT. Renger has recently published methodology describing how existing source documentation could be used to develop a program theory. A key component of Renger's methodology is the validation of the PT. Renger suggested using subject matter experts (SMEs) and program staff to validate the PT. However, it is uncertain whether relying on SMEs to validate a PT is sufficient. Objectives/Methods: Thus the current work focuses on whether there is empirical (i.e., research) and/or statistical (i.e., correlation analyses) support for a PT generated by SMEs. Results: Findings of the correlation analysis provide some evidence of the effectiveness of the SME validation process. Specifically, weak or very weak statistical support was found for 56.25% (N=9) of the relationships between mechanisms of change depicted in the model from Aim 5 (N=16). The results of the targeted literature review indicate a strong relationship between the PT generated by SMEs and the targeted literature search. Specifically, research evidence was found for 13 (81%) of the relationships between mechanisms of change identified in the model from Aim 5. Conclusion: PT can be reconstructed from source documentation. Reconstructed PT should be validated. Validation by SMEs appears to be a fast, cost-effective way of getting feedback on the initial draft of a PT. 
However, due to the limited scope of targeted literature search and correlational analysis, it is not possible to conclusively determine whether relying on subject matter experts is sufficient to validate reconstructed Program Theory. More research on TDE validation methods is needed.
    • Validation of the modified Basic Life Skills Screening Inventory.

      Tucker, Inez; Sales, Amos; Brown, Ronald Hunter.; Johnson, Bob (The University of Arizona., 1988)
      Rehabilitation and education are faced with the growing need for adequate and appropriate assessment tools for over 9,000 congenitally deaf-blind persons in this country. These tools are needed to help form the basis for evaluation of these clients/students so that programs appropriate to their specific needs can be determined. In the past, assessment of the functional development of this population has been based on tests standardized on populations of non-handicapped individuals. These tests measure primarily language abilities and experiential factors. Observational procedures can examine the spontaneous behavior of subjects over a long period of time. This is an alternative to standardized instruments. One of these in current use is the Basic Life Skills Screening Inventory. This instrument was developed in 1982 for the purpose of assisting educators and counselors in establishing the readiness of deaf-blind, developmentally disabled clients/students for vocational and life skills training. Though useful in its original form, this instrument has two major limitations. One is the fact that the rater is given only limited choices, resulting in a ceiling effect and a pronounced skew of many of its scales. Another limitation is its lengthy 283-item format, requiring too much administration time to be practical on a daily basis. The present study focused on making needed modifications in this instrument that would help alleviate these limitations, and continue to maintain high psychometric properties within the instrument. In doing this, rater choices were expanded from three (3) to five (5) column headings, and the instrument was reduced from 283 items to 145 items. This study was designed to answer the following questions: (1) Can the Basic Life Skills Screening Inventory be modified in such a way as to give the rater a greater response choice, thus allowing for a more refined assessment? 
(2) Can the 283-item Basic Life Skills Screening Inventory be shortened by approximately 50%, to allow for an easier and more practical administration, and continue to maintain high psychometric properties? Results indicate that, despite the modifications, a very high overall consistency among the items was maintained, with a total average alpha of .9935.

      Hawes, E. Clair Ladner; Erickson, Richard L.; Newlon, Betty J.; Wrenn, Robert L.; Arkowitz, Harold S. (The University of Arizona., 1984)
      The purpose of this study was to validate the marriage enrichment program, "Couples Growing Together." This is an Adlerian based program which exists in two formats; the Short Course is a one day, eight hour program and the Long Course is an eight week, sixteen hour program. Twenty-four couples from the Tucson East Stake of The Church of Jesus Christ of Latter-day Saints volunteered for the program. The criteria for couples' enrollment were that they had been married for a minimum of three years, did not consider that they had major problems in their relationship, had not been involved in a couple enrichment program in the previous year, and were not currently involved in marriage counseling. Although random assignment was not possible, later statistical analysis revealed negligible differences between the two groups. Eight couples were in the Long Course, and fourteen couples provided the wait list control group, then later received treatment as the Short Course. All participants were administered Bienvenu's Marital Communication Inventory and Spanier's Dyadic Adjustment Scale at pretest, one week posttest, and 15 week follow-up testing. In addition there was a sessional, posttest, and follow-up evaluation of each of the components of the program, plus a subjective assessment of the contribution "Couples Growing Together" makes to the relationship. The results indicated that the Long Course is significantly more effective than the Short Course when evaluated on factors of communication and relationship satisfaction. Moreover, these effects were not transitory as evidenced by the maintenance of gains over a 15 week period. Although some improvements in communication were shown for the Short Course, these gains were not statistically significant. A number of implications for future research of this program were presented as a result of the study. 
It would be advantageous to use this initial study as a basis for a more extensive evaluation of its different components and toward the development of an increasingly more adequate and powerful preventative and possibly intervening strategy for marital living.

      Iannarone, Antonio Thomas, 1945- (The University of Arizona., 1976)
    • Validity and item bias of the WISC-III with deaf children.

      Maller, Susan Joyce.; Sabers, Darrell; Sechrest, Lee; Bergan, John (The University of Arizona., 1994)
      The Wechsler Intelligence Scale for Children-Third Edition (WISC-III) is likely to become the most widely used test of intelligence with deaf children, based on the popularity of the previous versions of the test. Because the test was constructed for hearing children who use spoken English, the following major research questions were asked: (a) Does the WISC-III demonstrate adequate construct validity? and (b) Do specific items exhibit differential item functioning (DIF), and does the nature of the content of each item that exhibits DIF imply that the item is biased? The test was translated into sign language and administered to a total of 110 deaf children at three different sites. The deaf children ranged from ages 8 through 16 (M = 13.25, SD = 2.37), had hearing losses identified as severe or worse, were prelingually deaf, used sign language as their primary means of communication, and were not identified as having any additional handicapping conditions. The sample of deaf children was compared to a sample of 110 hearing children similar in age and Performance IQ. Construct validity was examined using a LISREL multi-sample covariance structure analysis. The covariance structures were different (χ²(91) = 119.42, p = .024). A Rasch model was used to detect DIF on the following subtests: Picture Completion, Information, Similarities, Arithmetic, Vocabulary, and Comprehension. All of these subtests exhibited DIF, and DIF, together with the differences in mean logit ability, resulted in numerous items that were more difficult for deaf children on the above Verbal subtests. Item bias was judged by examining the contents of items that exhibited DIF. Items were biased generally due to translation issues and differences in the educational curricula. Thus, deaf children are at a distinct disadvantage when taking these WISC-III subtests. Practitioners are urged to consider these findings when assessing deaf children.
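      The Rasch-based DIF screen described above can be illustrated with a rough sketch. Everything below is an assumption for illustration only: the responses are simulated, the difficulty estimator is a crude centered log-odds approximation rather than a full Rasch calibration, and the sample size and flagging cutoff are arbitrary, not the dissertation's procedure.

```python
# Hypothetical DIF screen: estimate item difficulties separately in two
# groups from simulated right/wrong responses, center each set, and flag
# items whose difficulty estimates diverge between groups.
import numpy as np

rng = np.random.default_rng(0)
n_per_group, n_items = 400, 6

def item_difficulties(responses):
    """Crude Rasch-style difficulties: centered negative log-odds of each
    item's proportion correct (centering fixes the scale's origin)."""
    p = responses.mean(axis=0).clip(0.01, 0.99)
    d = -np.log(p / (1 - p))          # harder item -> larger difficulty
    return d - d.mean()

# Simulated data: item 2 is made 2 logits harder for group B, mimicking
# an item that functions differently across the two groups.
true_d = np.array([-1.0, -0.5, 0.0, 0.3, 0.6, 0.6])
shift = np.array([0.0, 0.0, 2.0, 0.0, 0.0, 0.0])
theta_a = rng.normal(0.0, 1.0, size=(n_per_group, 1))
theta_b = rng.normal(0.0, 1.0, size=(n_per_group, 1))
resp_a = rng.random((n_per_group, n_items)) < 1 / (1 + np.exp(-(theta_a - true_d)))
resp_b = rng.random((n_per_group, n_items)) < 1 / (1 + np.exp(-(theta_b - true_d - shift)))

dif = item_difficulties(resp_b) - item_difficulties(resp_a)
flagged = np.where(np.abs(dif) > 0.8)[0]   # illustrative cutoff
print("difficulty differences:", np.round(dif, 2))
print("flagged items:", flagged)
```

      As in the abstract, a statistically flagged item is only DIF, not yet bias; the contents of flagged items would still need inspection (e.g., for translation artifacts) before judging them biased.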
    • The validity of computer-mediated communicative language tests

      Schulz, Renate; Heather, Julian C. (The University of Arizona., 2003)
      A recent innovation in language testing involves the use of computer-mediated communicative language tests, i.e., assessment of individuals' second language ability from transcripts of their interactions via computer-mediated communication (CMC). Studies have shown that such interactions in the first language involve a hybrid discourse with features of both written and spoken language, which suggests the possibility of making inferences about oral language ability from performance in a CMC environment. The literature to date offers little guidance on this matter. Research on computer-mediated communication has focused on its use in the second language classroom rather than in a testing context, while studies of the linguistic and interactional features of second language learners' CMC discourse have mostly been descriptive, with little direct comparison of CMC and face-to-face discourse. This study, therefore, examines the validity of making inferences from computer-mediated discourse to oral discourse through a comparison of the performance of 24 third-semester French students on two tests: a computer-mediated communicative French test and its nearest equivalent format in face-to-face testing, the group oral exam. Using a within-subjects design, counterbalanced for testing condition and discussion topic, the present study focuses on five areas which have important implications for validity: (a) the predictability of ratings of pronunciation on the group oral test; (b) the similarity of scores achieved on the CMC and group oral tests; the presence of similar (c) linguistic and (d) interactional features in the discourse of both tests; and (e) students' attitudes to the two tests. Results show that although scores on the two tests showed no statistically significant difference, students' discourse differed in many respects, which would thus invalidate any inferences made about oral ability from computer-mediated performance. Moreover, this study raises an important question about the role of computer-mediated communication in promoting second language acquisition, since the computer-mediated discourse contained fewer examples of the negotiation-of-meaning routines that interactionist theories hold to be important to language acquisition.
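      The score comparison described in the abstract (no statistically significant difference between CMC and group-oral scores for 24 students) is the classic paired, within-subjects setup. The sketch below uses invented scores and a hand-rolled paired t-test to show the shape of such an analysis; nothing here reproduces the study's data or its exact statistics.

```python
# Hypothetical paired comparison: each student has a CMC score and a
# group-oral score; a paired t-test asks whether the mean of the
# per-student differences is distinguishable from zero.
import math
import random

random.seed(1)
n = 24
oral = [random.gauss(75, 8) for _ in range(n)]
# Simulate CMC scores that track oral scores with noise but no true shift.
cmc = [s + random.gauss(0, 4) for s in oral]

diffs = [c - o for c, o in zip(cmc, oral)]
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))      # df = n - 1 = 23

cutoff = 2.069  # two-tailed .05 critical t value for 23 df
print(f"mean difference = {mean_d:.2f}, t(23) = {t:.2f}")
print("significant at .05:", abs(t) > cutoff)
```

      A counterbalanced design like the study's would additionally alternate which test each student takes first, so that order effects cancel across the sample instead of inflating the measured difference.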