Coyote Papers: Volume 12 (2001)
ABOUT THE COLLECTION
Coyote Papers: Working Papers in Linguistics is a publication of the Linguistics Circle, the Graduate Student Organization of the Department of Linguistics at the University of Arizona.
Volume 12: Coyote Papers: Working Papers in Linguistics, Language in Cognitive Science. (2001) Edited by Rachel L. Hayes, William D. Lewis, Erin L. O'Bryan, & Tania S. Zamuner.
For more information, visit the Coyote Papers website.
Contact Coyote Papers at firstname.lastname@example.org.
Organizing linguistic data: thematic introducers as an example

In this paper I propose to model specific French linguistic markers, thematic introducers (e.g. au sujet de, à propos de, en ce qui concerne, concernant, etc.), in the ContextO platform developed by the LaLic team at the Université de Paris IV. I use the software to locate thematic structures in texts. The software draws on a linguistic database to trace the relevant linguistic information that the user is looking for. The ultimate aim is to create a database that matches the linguistic representation, so as to produce a linguist-friendly tool. I review several studies that propose classifications containing thematic introducers and then explain how I have proceeded to propose a customized distribution of the thematic introducers that meets the constraints of the system.
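Locating thematic introducers in running text, as the abstract describes, can be sketched as simple pattern matching over a marker inventory. This is a minimal illustration only: the introducer list below is the abstract's example set, not the full ContextO inventory, and `find_introducers` is a hypothetical helper, not part of the actual platform.

```python
import re

# Illustrative list of French thematic introducers (from the abstract's
# examples); the real ContextO database is far richer.
INTRODUCERS = [
    "au sujet de",
    "à propos de",
    "en ce qui concerne",
    "concernant",
]

# Longest-first alternation so longer markers win over shorter ones.
PATTERN = re.compile(
    "|".join(re.escape(m) for m in sorted(INTRODUCERS, key=len, reverse=True)),
    re.IGNORECASE,
)

def find_introducers(text):
    """Return (introducer, character offset) pairs found in the text."""
    return [(m.group(0), m.start()) for m in PATTERN.finditer(text)]

hits = find_introducers("À propos de la réforme, rien n'est dit concernant le budget.")
```

A real system would additionally delimit the thematic segment each introducer opens, which is where the linguistic database comes in.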
The perception of novel phoneme contrasts in a second language: a developmental study of native speakers of English learning Japanese singleton and geminate consonant contrasts

This work explores development in the perception of Japanese singleton and geminate consonant contrasts among native speakers of English learning Japanese as a second language. The primary goal of this paper is to show that second language (L2) acquisition of phoneme contrasts that are absent from the first language (L1) exhibits development that is predictable from the acoustic properties of the contrast. Additionally, I attribute differences in the perception of particular singleton/geminate contrasts, by both native speakers of Japanese and learners of Japanese, to the acoustic properties of those contrasts.
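The main acoustic cue to the Japanese singleton/geminate contrast is consonant duration, often considered relative to neighboring material. The sketch below shows how a duration-ratio decision rule might look; the threshold value and the function itself are hypothetical placeholders for illustration, not figures or methods from the paper.

```python
def classify_consonant(closure_ms, preceding_vowel_ms, threshold=1.2):
    """Label a Japanese stop as singleton or geminate from the ratio of
    consonant closure duration to preceding-vowel duration.

    The 1.2 threshold is an assumed placeholder, not an empirical value
    from this study.
    """
    ratio = closure_ms / preceding_vowel_ms
    label = "geminate" if ratio >= threshold else "singleton"
    return label, ratio
```

Contrasts whose duration ratios sit near such a boundary would, on the abstract's hypothesis, be the ones learners (and even native listeners) find hardest to perceive.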
Syntax in performance: minimalist derivation in the late assignment of syntax theory

This paper presents an account of how Minimalist derivation (Chomsky 1995) can be embedded in a comprehension model, the Late Assignment of Syntax Theory (LAST) (Townsend & Bever 2001). The issues addressed concern the interface between the first step of the model, in which heuristic strategies apply to the utterance, and the second step, Minimalist derivation. Two questions about the interface are addressed: 1) How are the features in the numeration needed to begin a Minimalist derivation chosen? 2) What dictates which units Merge in the derivation? Chomsky (1995:226-227) claims that we need not ask either question. I review his reasons and argue that we can and should answer these questions in a workable comprehension model. In response to the first question, I demonstrate that heuristic strategies applied to the utterance determine which features enter the numeration. In response to the second, I discuss how heuristic strategies combined with lexical information determine which items Merge.
Measuring conceptual distance using WordNet: the design of a metric for measuring semantic similarity

This paper describes the development of a metric for measuring the semantic distance, or similarity, of words using the WordNet lexical database. Such a metric could be of use in the development of search engines and text retrieval systems, tasks for which the richness of natural language can cause difficulty. Further, such a metric can prove invaluable to psycholinguists who wish to study lexical semantic similarity or speech errors (specifically malapropisms). The paper first explores an adjusted distance metric, à la Rada et al. (1989), and the problems such a metric presents. Further analysis shows that this distance metric can be adjusted using density calculations, based both on depth within the network and on local density. The paper ends with a discussion of automating the task of identifying regions within the semantic space over which density calculations can be made.
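The baseline metric discussed here, following Rada et al. (1989), counts edges on the shortest path between two concepts in the is-a hierarchy. The sketch below implements that edge count over a tiny hand-built taxonomy standing in for WordNet; the `HYPERNYM` table and function names are illustrative assumptions, not the paper's actual implementation.

```python
# Toy is-a hierarchy standing in for WordNet hypernym links;
# each word maps to its (single) hypernym.
HYPERNYM = {
    "dog": "canine", "wolf": "canine", "canine": "carnivore",
    "cat": "feline", "feline": "carnivore",
    "carnivore": "mammal", "mammal": "animal",
}

def path_to_root(word):
    """Walk hypernym links from a word up to the top of the hierarchy."""
    path = [word]
    while path[-1] in HYPERNYM:
        path.append(HYPERNYM[path[-1]])
    return path

def edge_distance(w1, w2):
    """Rada-style semantic distance: edges on the shortest path between
    two words, going through their lowest common ancestor."""
    p1, p2 = path_to_root(w1), path_to_root(w2)
    depth_in_p2 = {node: i for i, node in enumerate(p2)}
    for i, node in enumerate(p1):
        if node in depth_in_p2:          # lowest common ancestor
            return i + depth_in_p2[node]
    return None  # no common ancestor in this toy graph
```

Here `edge_distance("dog", "wolf")` is 2 (via "canine") while `edge_distance("dog", "cat")` is 4 (via "carnivore"), which is exactly the kind of raw count the paper's depth and density adjustments are meant to correct: edges deep in a dense region of WordNet represent smaller conceptual steps than edges near the top.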
Modeling semantic coherence from corpus data: the fact and the frequency of a co-occurrence

This paper presents a preliminary evaluation of a corpus-based representation of individual words and a method for generalizing over these representations. The vector space is represented in a way that gives weight to the fact that words co-occur rather than to the frequency of their co-occurrence. This format is hypothesized to allow the vector space to be reduced, minimizing the negative effects of data sparseness and enhancing the model's ability to generalize words to novel contexts. The model is assessed by comparing computer-calculated probabilities of different verb-argument combinations with human subjects' judgements about the appropriateness of these combinations. The results indicate a correlation between the probabilities calculated by the model and the subjects' evaluations.
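The fact-versus-frequency distinction can be made concrete with a small sketch: each word's vector records whether it co-occurred with a context word (0/1), not how often. The toy corpus and helper functions below are assumptions for illustration, not the paper's actual data or model.

```python
import math

# Toy verb-argument co-occurrence pairs; note "eat"/"bread" occurs twice.
corpus = [
    ("eat", "bread"), ("eat", "apple"), ("eat", "bread"),
    ("drink", "water"), ("drink", "juice"), ("eat", "soup"),
]

def binary_vectors(pairs):
    """Build 0/1 vectors recording the *fact* of co-occurrence:
    a repeated pair contributes no more than a single occurrence."""
    contexts = sorted({ctx for _, ctx in pairs})
    vecs = {}
    for verb, ctx in pairs:
        vec = vecs.setdefault(verb, [0] * len(contexts))
        vec[contexts.index(ctx)] = 1   # presence, not frequency
    return contexts, vecs

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

contexts, vecs = binary_vectors(corpus)
```

With the contexts sorted as ["apple", "bread", "juice", "soup", "water"], "eat" gets the vector [1, 1, 0, 1, 0] even though it co-occurred with "bread" twice, which is the representational choice the paper hypothesizes reduces data sparseness effects.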