L1/L2 Eye Movement Reading of Closed Captioning: A Multimodal Analysis of Multimodal Use
Advisor: Waugh, Linda R.
Bever, Thomas G.
Goodman, Yetta M.
Committee Chair: Waugh, Linda R.
Bever, Thomas G.
Publisher: The University of Arizona.
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract: Learning in a multimodal environment entails the presentation of information in a combination of more than one mode (e.g., written words, illustrations, and sound). Past research on the benefits of multimodal presentation of information includes both school-age children and adult learners (e.g., Koolstra, van der Voort & d'Ydewalle, 1999; Neuman & Koskinen, 1992), as well as both native and non-native language learners (e.g., d'Ydewalle & Gielen, 1992; Kothari et al., 2002). This dissertation focuses on how learners of differing proficiencies in English use combinations of modalities to gain better comprehension (cf. Mayer, 1997, 2005; Graber, 1990; Slykhuis et al., 2005). The addition of the written mode (closed captioning) to the already multimodal environment of film and video presentations is analyzed. A Multimodal Multimedia Communicative Event is used to situate the language learner. The research questions focus on the eye movements of the participants as they read moving text both with and without the audio and video modes of information. Small case studies also give context to four participants by bringing their individual backgrounds and observations to bear on the use of multimodal texts as language-learning tools in a second or foreign language learning environment. It was found that non-native English speakers (NNS) (L1 Arabic) show longer eye movement patterns in reading dynamic text (closed captioning), echoing past research with static texts, while native speakers of English (NS) tend to have quicker eye movements. In a multimodal environment the two groups also differed: NNS looked longer at the closed captioning, while NS were able to navigate the text presentation quickly. Although associative activation (Paivio, 2007) between the audio and print modalities was not found to alter the eye movement patterns of the NNS, participants did alternate between the modalities in search of supplementary information.
Other research using closed captioning and subtitling has shown that viewing a video program with written text added turns the activity into a reading activity (Jensema, 2000; d'Ydewalle, 1987). The current study found this to be the case, but the results differed with respect to proficiency and strategy.
Degree Program: Second Language Acquisition & Teaching