• Against [lateral]: Evidence from Chinese Sign Language and American Sign Language

      Ann, Jean; Myers, James; Pérez, Patricia E.; University of Arizona (Department of Linguistics, University of Arizona (Tucson, AZ), 1990)
      American Sign Language (ASL) signs are claimed to be composed of four parameters: handshape, location, movement (Stokoe 1960) and palm orientation (Battison 1974). This paper focuses solely on handshape, that is, the configuration of the thumb and the fingers in a given sign. Handshape is significant in ASL and Chinese Sign Language (CSL); that is, minimal pairs exist for handshape in each. Thus, the two ASL signs in (1) differ in one parameter: the handshapes are different, but the location, palm orientation and movement are the same. Similarly, the two CSL signs in (2) differ in one parameter: handshape. A logical next question asks if handshapes are further divisible into parts; more specifically, are handshapes composed of distinctive features? This question is not new; in fact, researchers have made many proposals for ASL handshape features (Lane, Boyes-Braem and Bellugi, 1979; Mandel, 1981; Liddell and Johnson, 1985; Sandler, 1989; Corina and Sagey, 1988 and others). This paper focuses on the proposal of Corina and Sagey (1988). In Section 2, I outline the proposed system for the distinctive handshapes of ASL, of which [lateral] is a part. Then, using data from ASL and CSL, I give three arguments in support of the claim that there is not sufficient justification in ASL or CSL for the feature [lateral]. First, I show in Section 3 that the prediction which follows from the claim that [lateral] applies only to the thumb, namely that the thumb behaves differently from the other fingers, is not borne out by CSL data. Second, I argue in Section 4 that since other features (proposed by Corina and Sagey, 1988) can derive the same phonetic effects as [lateral], [lateral] is unnecessary to describe thumb features in either ASL or CSL. Third, in Section 5, I use ASL and CSL data to argue that the notion of fingers as "specified" or "unspecified", although intuitively pleasing, should be discarded.
If this notion cannot be used, the feature [lateral] does not uniquely identify a particular set of handshapes. I show that CSL data suggest that two other features, [contact to palm] and [contact to thumb], are independently needed. With these two features, and the exclusion of [lateral], the handshapes of both ASL and CSL can be explained. In Section 6, the arguments against [lateral] are summarized.
    • Binarity and Ternarity in Alutiiq

      Hewitt, Mark S.; Ann, Jean; Yoshimura, Kyoko; Brandeis University (Department of Linguistics, University of Arizona (Tucson, AZ), 1991)
      One of the pillars of phonological research has been the desirability of representing phonological processes as being local in application. Locality, as a principle of the grammar, constrains the relation between the trigger and target elements of a phonological process to one of adjacency. Adjacency, within the framework of Autosegmental Phonology and Underspecification theory, consists of two varieties: tier adjacency and structural adjacency (Myers (1987)). Tier adjacency examines linear relations among elements within an isolated tier of the representation (e.g. the tonal tier), while structural adjacency examines these relations mediated through the skeletal core, which organizes and maintains the linear relations between phonemes and their constituent elements. Locality and Adjacency are not simply the preserve of featural relations and their skeletal core. The core itself, whether viewed as C/V slots, X/X' timing slots, or Root nodes, is organized into the grander structures of the Prosodic Hierarchy (e.g. syllable, Foot, etc.). The formation of these units is a phonological process and as such subject to the same principles. A portion of the ongoing debates in metrical theory has focused on whether metrical structure, in particular Foot structure, is limited to binary constituents. Kager (1989) proposes an extreme Binarism, with all metrical structure initially being limited to binarity. Hayes (1987) and Prince (1990) only commit to a strong preference for binary Feet. Halle & Vergnaud (1987) propose a system allowing binary, ternary, and unbounded Feet. The principle of Locality with its requirement of adjacency argues for a binary view of metrical structure where the trigger and target of the structure building process are unmetrified elements. The most serious challenge to this view is the existence of languages which employ ternary constituents, e.g. Cayuvava, Chugach Alutiiq.
These languages have been cited as evidence in arguing for a theory capable of generating ternary Feet. In a framework designed to maintain strict locality, surface ternary constituents must be derived from underlying binary structures. This paper proposes a solution to this problem which relies on the ternary constituent being a complex constituent composed of a binary Foot grouped with an adjacent syllable. This constituent is not a Foot, but rather a Prosodic Word. Generating an iterative ternary Prosodic Word requires a new algorithm for building metrical structure. This algorithm builds metrical constituents in an opportunistic manner. Opportunistic building creates metrical constituents as soon as possible, instead of applying one particular structure building rule across the whole string before the next rule applies. This paper examines these issues through the metrical structures of the Alutiiq dialects described by Leer (1985a). The rich and detailed work of Leer serves admirably as a base for elucidating the issues of ternarity. Unfortunately, the ramifications of these proposals beyond the issue of ternarity can only be briefly alluded to in this paper. Length constraints do not permit me to present all aspects of these proposals in the full detail they require for their justification.
    • Causative Formation in Kammu: Prespecified Features and Single Consonant Reduplication

      Takeda, Kazue; Maye, Jessica; Miyashita, Mizuki; University of California, Irvine (Department of Linguistics, University of Arizona (Tucson, AZ), 1998)
    • Coda Neutralization: Against Purely Phonetic Constraints

      Heiberg, Andrea; Suzuki, Keiichiro; Elzinga, Dirk; University of Arizona (Department of Linguistics, University of Arizona (Tucson, AZ), 1995)
      The neutralization of the laryngeal features of a consonant that is not directly followed by a vowel is a common process cross-linguistically. Laryngeal neutralization in this position has a clear phonetic cause: laryngeal features are not salient unless they are immediately followed by a vowel. Since laryngeal neutralization has a phonetic cause, it seems reasonable to characterize it directly in phonetic terms, without positing any additional layer of phonological abstraction. However, a phonetic explanation is not sufficient to account for all cases of laryngeal neutralization. For example, in Korean, laryngeal neutralization occurs in a nonneutralizing phonetic environment; in Nisgha, laryngeal neutralization occurs only in the reduplicant, although the phonetic environment for neutralization is found in both the reduplicant and the base. Although phonetics is the major factor leading to the development of these types of restrictions on laryngeal features, I argue that a phonetic account is not adequate for all such restrictions. Abstract phonological constraints and representations are necessary. Hence, two types of neutralization are possible: (i) phonetic neutralization, which results directly from the lack of saliency of cues and occurs in every instance of the neutralizing environment; and (ii) abstract phonological neutralization, which may occur where the neutralizing environment is absent (as will be demonstrated for Korean), and may fail to occur in every instance of the neutralizing environment (as will be demonstrated for Nisgha).
    • The Consequence of Rule Ordering in Haya Tonology

      Hyman, Larry; Fulmer, S. Lee; Ishihara, Masahide; Wiswall, Wendy; University of California, Berkeley (Department of Linguistics, University of Arizona (Tucson, AZ), 1989)
      In the 1970s a major debate took place on the question of rule ordering in phonology. One group argued that the specific ordering of phonological rules, if needed at all, was always intrinsic, being predictable on the basis of universal principles. The second group, following in the tradition of Chomsky and Halle and the SOUND PATTERN OF ENGLISH, responded that these principles did not work, and that rule ordering is extrinsic, having to be stipulated in the phonologies of a number of languages. In the course of this debate, the proponents of extrinsic rule ordering sometimes argued that the analyses forced by the universal, intrinsic approach lacked insight, missed generalizations or simply did not work. Curiously, although positions were taken against extrinsic rule ordering and in favor of either simultaneous or random sequential ordering, no one to my knowledge argued in parallel fashion that the extrinsic approach lacked insight, missed generalizations, or simply did not work. In this paper I would like to present one such possible case. I shall attempt to demonstrate that in the lexical tonology of Haya, an Eastern Bantu language spoken in Tanzania, extrinsic rule ordering simply gets in the way. In section 1 I present the relevant tonal data, showing that a classical autosegmental analysis utilizing extrinsic rule ordering runs into serious problems. After showing, in section 2, that various alternative solutions involving rule ordering still fail to overcome these problems, I then consider in section 3 two possible analyses: one with simultaneous application of the three lexical tone rules in question, the other exploiting morphemic planes. I will conclude that this may be one language where simultaneous rule application is warranted. The data come from the lexical tonology of Haya, a subject that was covered in some detail in Hyman and Byarushengo (1984).
For reasons of simplicity, I shall present only the underlying and lexical representations of Haya verb forms. It should be borne in mind that the forms cited in this study are subject to subsequent postlexical tone rules that are described in the Hyman and Byarushengo paper.
    • Deriving Abstract Representations Directly from the Level of Connected Speech

      Bourgeois, Thomas C.; Myers, James; Pérez, Patricia E.; University of Arizona (Department of Linguistics, University of Arizona (Tucson, AZ), 1990)
    • Diphthongization and Coindexing

      Hayes, Bruce; Crowhurst, Megan; University of California, Los Angeles (Department of Linguistics, University of Arizona (Tucson, AZ), 1988)
      The tree model of segment structure proposed by Clements (1985) is an important innovation in phonological theory, making possible a number of interesting and arguably correct predictions about the form of assimilation rules, locality of rule application, and the organization of the distinctive feature system. Clements's proposal has given rise to an expanding literature, including Sagey (1986), Schein and Steriade (1986), Archangeli and Pulleyblank (forthcoming), and McCarthy (forthcoming). In this paper, I argue that the tree model as it stands faces a serious empirical shortcoming: it fails to provide an adequate account of diphthongization rules, here defined as rules that convert a segment (vowel or consonant) into a heterogeneous sequence. I propose a revised tree model, which for clarity and explicitness uses coindexation rather than association lines to indicate temporal association. I argue that my proposal solves the diphthongization problem, and that it also makes it possible to restrict the power of segment structure theory in the following way: the "feature- bearing units" (Clements 1980) for any feature are always elements of the prosodic tier, and not nodes in the segment tree.
    • Double-sided Effect in OT: Sequential Grounding and Local Conjunction

      Suzuki, Keiichiro; Suzuki, Keiichiro; Elzinga, Dirk; University of Arizona (Department of Linguistics, University of Arizona (Tucson, AZ), 1995)
      In a standard SPE-style rewrite rule scheme, the positioning of the environmental dash ("__") directly expresses both adjacency and linear precedence relations between the focus and the determinant. For example, all three rules in (1) involve an A-to-B alternation, but differ from each other in the focus (A)-determinant (X, Y) relation: in (1a), A becomes B when preceded by X; in (1b), A becomes B when followed by Y; and in (1c), A becomes B when double-sided (preceded by X and followed by Y).

      (1) a. A → B / X __
          b. A → B / __ Y
          c. A → B / X __ Y

      Thus, in this model, both adjacency and linear precedence relations are treated as properties of a rule. This view has been carried over to subsequent work in some guise or other (see, e.g., Howard 1972, Cho 1991, Archangeli and Pulleyblank (A&P) 1994). The question to be addressed here is how these various focus-determinant relations are expressed if there are no rules (see McCarthy 1995b for a recent treatment of this issue). In this paper, I would like to consider this question from the perspective of Optimality Theory (henceforth OT) (Prince and Smolensky 1993, McCarthy and Prince (M&P) 1993). Specifically, I consider the three types of focus-determinant relations seen in (1) with respect to the phenomenon of vowel raising. We find that the variation in vowel raising among Basque, Old High German, and Woleaian parallels the variation illustrated in (1): in many dialects of Basque, a low vowel raises to a mid vowel when preceded by a high vowel (de Rijk 1970, Hualde 1991) (=1a); in Old High German, a low vowel raises to a mid vowel when followed by a high vowel (Voyles 1992) (=1b); and in Woleaian (spoken on Woleai Island in Micronesia), a low vowel raises to a mid vowel when double-sided by high vowels (Howard 1972, Sohn 1975, Poser 1982) (=1c). I argue that all of these cases are accounted for by allowing constraints to make reference to adjacency and linear precedence information.
Formally, I propose the following two notions: Sequential Grounding (Smolensky 1993), a syntagmatic extension of Grounded Conditions (A&P 1994), and Local Conjunction (Smolensky 1993, 1995), a UG-operation which conjoins two constraints (details of these notions are explained in section 2.2.2.). This paper is organized as follows. Section 2 provides data and an analysis of the double-sided raising in Woleaian, introducing Sequential Grounding (Smolensky 1993) and Local Conjunction (Smolensky 1993, 1995). I show that Local Conjunction of two Sequential Grounding constraints accounts for the fact that one adjacent high vowel on either side is not sufficient to trigger the raising, but there must be a high vowel on each side. Section 3 gives brief analyses of Basque and Old High German. I demonstrate that reranking of the constraints proposed for the double-sided raising in Woleaian accounts for the other cases of raising (Basque and Old High German). Finally, in section 4, a summary of the analyses and the conclusion are provided.
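The three environment types in (1) can be made concrete with a small sketch (not part of the paper; the string encoding and the choice of "a" raising to "e" next to "i"/"u" are hypothetical simplifications of the Basque, Old High German, and Woleaian patterns):

```python
# Illustrative sketch of the three SPE-style rule environments in (1):
# "a" raises to "e" when the required neighboring segments are high vowels.
# Which sides are checked models (1a) preceded-by, (1b) followed-by, and
# (1c) double-sided triggering. All names here are illustrative only.

HIGH = {"i", "u"}  # high vowels act as triggers

def raise_vowel(form, before=False, after=False):
    """Rewrite 'a' -> 'e' when the selected neighbors are high vowels."""
    segs = list(form)
    out = []
    for i, s in enumerate(segs):
        prev_high = i > 0 and segs[i - 1] in HIGH
        next_high = i < len(segs) - 1 and segs[i + 1] in HIGH
        if s == "a" and (before or after):
            # every required side must have a high-vowel trigger
            if (not before or prev_high) and (not after or next_high):
                s = "e"
        out.append(s)
    return "".join(out)

# (1a) Basque-type: triggered by a preceding high vowel
print(raise_vowel("ia", before=True))               # -> "ie"
# (1b) Old High German-type: triggered by a following high vowel
print(raise_vowel("ai", after=True))                # -> "ei"
# (1c) Woleaian-type: both sides must be high
print(raise_vowel("iai", before=True, after=True))  # -> "iei"
print(raise_vowel("ia", before=True, after=True))   # -> "ia" (one side is not enough)
```

The last call mirrors the Woleaian fact the paper derives via Local Conjunction: a high vowel on only one side fails to trigger raising.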
    • Floating Accent in Mayo

      Hagberg, Larry; Fulmer, S. Lee; Ishihara, Masahide; Wiswall, Wendy; University of Arizona (Department of Linguistics, University of Arizona (Tucson, AZ), 1989)
      A major claim of this paper is that the distinctive features of lexical accent are formally identical to those of tone, or at least to a subset of tonal features. The terms accent and tone have been used in many different ways in the literature, but throughout this paper I will use both terms to refer only to lexical features that surface as contrastive pitch, length, volume and/or other features of prominence. By lexical I mean features whose phonetic realization cannot be predicted by any regular metrical structure or phonological rule. I am assuming that the placement of stress is always determined by a set of language-particular (but parameter-based) rules which build metrical structure, with the location of exceptional stress indicated by a lexical diacritic called accent. Examples of such systems of rules are described in Hayes (1982), Hammond (1986) and Halle and Vergnaud (1987 a and b). Although metrical structure has generally been associated with non-tonal languages, there are also some tonal languages which exhibit the presence of metrical structure. Examples of such languages include Creek (Haas (1977)), Malayalam (Mohanan (1982)) and Copala Trique (Hollenbach (1988)). Thus the presence of metrical structure is not sufficient in itself to distinguish a non-tonal language from a tonal language. What, then, distinguishes these two categories from one another? There are two general distinctions which have traditionally been made in classifying languages as tonal versus non-tonal. One distinction is that many tonal languages exhibit a variety of lexically contrastive tones, while most, if not all, of the degrees of stress in a non-tonal language can usually be explained using only one kind of lexical accent. Thus, tonal languages can have more than one kind of lexical tone, whereas non-tonal languages can have lexical accent but not tone, and there is apparently only one kind of lexical accent.
I will discuss this apparent asymmetry in section three. The other distinction between tonal and non-tonal, for which I present counterevidence in this paper, is that autosegmental status has been attributed to tone, but not to accent, in a number of languages; see, for example, Goldsmith (1976), Williams (1976) and Pulleyblank (1983). For all such languages, the Universal Association Convention (UAC) (Goldsmith (1976)) predicts the location of most tones, with the remaining tones accounted for by lexical pre-linking. From an examination of the literature it appears, then, that the main distinction between the terms tonal and non-tonal is that tonal languages have lexical tone while non-tonal languages have lexical accent. Formally, both of these devices are lexical diacritics, but they appear to differ in that tone can be an autosegment, while no such status has ever been claimed for accent. Therefore, the question to be addressed in this paper is this: Can an accentual diacritic have autosegmental status? Using data from Mayo, a Uto-Aztecan language of northwestern Mexico, I will show that the answer is yes. The implication, then, is that accent is formally the same as tone, or at least the same as one variety of tone. A significant claim follows from this. If accent is formally the same as a tone, then no language can exist in which lexical accent occurs independently of all tonal features. As far as I know, no such language has been shown to exist. The paper is organized as follows. Section one presents the data and provides two possible analyses of Mayo stress using the theory of Halle and Vergnaud (1987 a and b) (henceforth H&V). I show that Mayo has lexical accent which floats in underlying representation (UR), just like an autosegmental tone.
Section two demonstrates that stress assignment crucially has to precede and follow reduplication, thus indicating that the rules of stress assignment are cyclic and that lexical accent refloats at the end of each cycle. In section three I explore the theoretical implications of this analysis and propose that accent is formally the same as tone.
    • Floating H (and L*) Tones in Ancient Greek

      Golston, Chris; Myers, James; Pérez, Patricia E.; University of California, Los Angeles (Department of Linguistics, University of Arizona (Tucson, AZ), 1990)
      This paper looks at two recent approaches to accentuation in Ancient Greek, Steriade 1988 and Sauzet 1989. Both Steriade and Sauzet include treatments of enclitic accentuation in Ancient Greek which I will argue need to be revised. Steriade offers a metrical analysis that is consistent with most of the data but theoretically suspect. Sauzet 1989 offers a mixed metrical/autosegmental account that is theoretically more appealing but fails to account for established generalizations about enclitic accentuation. I will adopt the general framework of Sauzet, which seems to be more in line with normal (non-enclitic) accentuation in Ancient Greek, but revise his analysis of enclitic accent. The result, I hope, will be a more insightful approach to enclitic accent than either Steriade's or Sauzet's. An added bonus of the present analysis is that it uses the same footing procedures that Allen (1973) has motivated independently for Ancient Greek primary and secondary stress; this is true of neither Sauzet's nor Steriade's analyses.
    • Hypocoristic Formation in Nootka

      Stonham, John; Myers, James; Pérez, Patricia E.; Stanford University (Department of Linguistics, University of Arizona (Tucson, AZ), 1990)
      In Nootka, there is a strategy for forming hypocoristic names, or terms of endearment, from the normal form of the name by a combination of truncation, vowel mutation and affixation. The nature of this formation is highly suggestive of the type of morphology described by many linguists as subtractive. In this paper, however, we will show that what actually occurs is a pattern of template-filling based on the prosodic structure of the language. We will argue that the building of hypocoristic forms is, in fact, highly reminiscent of reduplicative strategies employed in this language as argued for in Stonham 1987 for the closely related Nitinaht language, the difference being that reduplication subsequently concatenates with the structure it has drawn from, while Nootka hypocoristic formation, henceforth H.F., abandons the remainder of the original structure, retaining only the copied portion required for the template. Before investigating the nature of H.F., we will first present certain aspects of Nootka structure which will be important for a clear exposition of the problem.
    • Is Plane Conflation Bracket Erasure?

      Kang, Hyunsook; Crowhurst, Megan; University of Texas at Austin (Department of Linguistics, University of Arizona (Tucson, AZ), 1988)
    • Is Voicing a Privative Feature?

      Cho, Young-mee Yu; Myers, James; Pérez, Patricia E.; Stanford University (Department of Linguistics, University of Arizona (Tucson, AZ), 1990)
      A typology of voicing assimilation has been presented in Cho (1990a), whose result will be summarized in section 2. Like many other marked assimilations, voicing assimilation is characterized as spreading of only one value of the feature [voice]. The main body of this paper will compare a privative theory of voicing with a binary theory. It has often been noted that assimilation rules are natural rules since they are cross-linguistically very common. It has also been observed that they are asymmetric in nature (Schachter 1969, Schane 1972). For example, nasalization, palatalization, and assimilation of coronals to noncoronals are extremely common, but the reverse processes are not frequently found in natural languages. On the other hand, voicing assimilation has been known to be relatively free in choosing its propagating value. Whereas the other assimilation rules are sensitive to the marked and the unmarked value of a given feature, assimilating a voiced consonant to a voiceless consonant has been assumed to be as natural as the reverse process (Anderson 1979, Mohanan (forthcoming)). I have argued that voicing assimilation is no different in its asymmetry from the other types of assimilation by demonstrating the need for two parameters and one universal delinking rule. A universal typology emerges from the possible interaction among the values associated with delinking and spreading parameters. The following theoretical assumptions will be utilized throughout this paper. First, I follow the standard assumption in Autosegmental Phonology that assimilation rules involve not a change or a copy but a reassociation of the features. This operation of reassociation, called spreading, is assumed to be the sole mechanism of assimilation rules (Goldsmith 1979, Steriade 1982, Hayes 1986). Second, I assume Underspecification Theory, which requires that some feature values be unspecified in the underlying representation (Kiparsky 1982, Archangeli and Pulleyblank (forthcoming)). Distinguishing different versions of Underspecification Theory will not be relevant to the discussion, since I will discuss whether voicing is universally a privative feature or a binary opposition. Third, I assume the principle of Structure Preservation (Kiparsky 1985, Borowsky 1986), which is expressed in terms of constraints that apply in underlying representations and to each stage in the derivation up to the level at which they are turned off (usually in the lexicon). Structure Preservation will be invoked to classify obstruents on the one hand, and sonorants and the other redundantly voiced segments on the other. Last, I translate the Classical Praguean conception of the relation between neutralization and assimilation into the autosegmental framework, and assume that assimilation is always feature-filling. All instances of the effect of feature-changing assimilation rules, then, are the result of two independent rules of (1) delinking and (2) spreading (Poser 1982, Mascaró 1987).
    • Izi Vowel Harmony and Selective Cyclicity

      Gerfen, Chip; Ann, Jean; Yoshimura, Kyoko; University of Arizona (Department of Linguistics, University of Arizona (Tucson, AZ), 1991)
      In this paper, I provide an analysis of vowel harmony in Izi, an Igbo language spoken in the East-Central State of Nigeria. Using data from Meier, Meier, and Bendor-Samuel (1975; hereafter MMB), I argue that harmony in complex verbal structures in Izi is inadequately accounted for within a level-ordered model of lexical phonology (Kiparsky 1982, Mohanan 1982, etc.), claiming instead that the harmony facts are best accommodated within a non-level-ordered approach (cf. Halle and Vergnaud 1987, Halle and Kenstowicz 1991; Halle, Harris, and Vergnaud 1991). In sections 1 and 2, I provide a description of the general pattern of the [ATR]-based vowel harmony system in Izi and motivate [+ATR] as the only value of the feature [ATR] present at the level of underlying representation. In section 3, data are presented demonstrating the inadequacy of a level-ordered treatment of vowel harmony in verbal structures. Finally, in section 4, I propose an alternative, non-level-ordered analysis that derives the attested harmony facts via cyclic rule application at a single level. Crucially, particular morphemes in verbal structures are claimed to undergo a pass of the cyclic rules prior to concatenation, a phenomenon which I call selective cyclicity.
    • Less Stress, Less Pressure, Less Voice

      Miyashita, Mizuki; Maye, Jessica; Miyashita, Mizuki; University of Arizona (Department of Linguistics, University of Arizona (Tucson, AZ), 1998)
      In this paper, I provide an analysis of Tohono O'odham vowel devoicing with respect to its physiological explanation. There are three points in this paper. First, it provides data on devoicing (of consonants and vowels) in Tohono O'odham. Second, an analysis of devoicing in terms of subglottal pressure drop is provided. Third, the devoicing is accounted for within the framework of OT (McCarthy and Prince 1993, Prince and Smolensky 1993). The organization of the paper is as follows. In section 2, the background of the language, including both voiced and voiceless vowels, is described. In section 3, data on Tohono O'odham words with voiceless vowels are provided. Then the distribution of devoiced segments is discussed. In section 4, an analysis of devoicing with respect to subglottal pressure drop is presented with schematic diagrams. Then an OT account utilizing phonetic constraints is presented.
    • Level-ordered Lexical Insertion: Evidence from Speech Errors

      Golston, Chris; Ann, Jean; Yoshimura, Kyoko; University of California, Los Angeles (Department of Linguistics, University of Arizona (Tucson, AZ), 1991)
    • Marshallese Single Segment Reduplication

      Spring, Cari; Crowhurst, Megan; University of Arizona (Department of Linguistics, University of Arizona (Tucson, AZ), 1988)
    • Menomini Vowel Harmony: O(pacity) & T(ransparency) in OT

      Archangeli, Diana; Suzuki, Keiichiro; Suzuki, Keiichiro; Elzinga, Dirk; University of Arizona (Department of Linguistics, University of Arizona (Tucson, AZ), 1995)
    • The Morphemic Plane Hypothesis and Plane Internal Phonological Domains

      Ishihara, Masahide; Fulmer, S. Lee; Ishihara, Masahide; Wiswall, Wendy; University of Arizona (Department of Linguistics, University of Arizona (Tucson, AZ), 1989)
    • Multiple Scansions in Loanword Phonology: Evidence from Cantonese

      Silverman, Daniel; Ann, Jean; Yoshimura, Kyoko; University of California, Los Angeles (Department of Linguistics, University of Arizona (Tucson, AZ), 1991)
      In loanword phonology we seek to uncover the processes by which speakers possessing one phonological system perceive, apply native representational constraints on, and ultimately produce forms which have been generated by a different phonological system. In other words, loanwords do not come equipped with their own phonological representation. For any phonetic string, it is only native speakers for whom a fully articulated phonological structure is present. As host language speakers perceive foreign forms solely in accordance with their own phonological system, they instantiate native representations on the acoustic signal, fitting the superficial input into their own phonological system as closely as possible. Given these assumptions, it should not be surprising that despite the identity of a given acoustic signal when impinging upon the inner ear of speakers of different languages, this input may be represented, and ultimately produced, in a distinct manner in each language it enters. The loanword phonology under investigation here, that of Cantonese, will be shown to possess two distinct levels. The first level of loanword phonology consists of a parsing of the input signal into unprosodized segment-sized chunks, for which native feature matrices are provided. As this level of loanword phonology is solely concerned with perceiving the input, and providing a preliminary linguistic representation, we may refer to it as the Perceptual Level. It is only when full prosodic structure is supplied for the incoming form that the raw segmental material may undergo phonological processes, so that it may be realized in conformity with native prosodic constraints on syllable structure. As this stage of the loanword phonology admits the possibility of true phonological processes acting on segments, it may be regarded as the Operative Level of the loanword phonology.
The processes which apply at the Operative Level of the Cantonese loanword phonology do not exist in native phonological derivations. As these operations were not acquired during the initial acquisition period, they exist in a separate domain from native phonological operations, presumably supplied by Universal Grammar. The only property they share with native phonological processes is that the same constraints exert an influence on the output of both systems. I will provide evidence for the Perceptual Level and the Operative Level of the loanword phonology by showing that loanwords undergo two distinct, ordered scansions during the course of the derivation. Scansion One will be shown to correspond to the Perceptual Level of the loanword phonology, providing raw segmental representation to incoming forms. Scansion Two will be shown to correspond to the Operative Level of the loanword phonology, providing prosodic representation which will be shown to trigger various phonological operations on the perceived segments.