Tools for Analyzing Talk

 

Part 3:  Morphosyntactic Analysis

 

 

Brian MacWhinney

Carnegie Mellon University

 

March 30, 2019


https://doi.org/10.21415/T5B97X

When citing the use of TalkBank and CHILDES facilities, please use this reference to the last printed version:

 

MacWhinney, B. (2000). The CHILDES Project: Tools for Analyzing Talk. 3rd Edition. Mahwah, NJ: Lawrence Erlbaum Associates.

 

This allows us to systematically track usage of the programs and data through scholar.google.com.


 

1      Introduction
2      Morphosyntactic Coding
2.1    One-to-one correspondence
2.2    Tag Groups and Word Groups
2.3    Words
2.4    Part of Speech Codes
2.5    Stems
2.6    Affixes
2.7    Clitics
2.8    Compounds
2.9    Punctuation Marks
2.10   Sample Morphological Tagging for English
3      Running the Program Chain
4      Morphological Analysis
4.1    The Design of MOR
4.2    Example Analyses
4.3    Exclusions in MOR
4.4    Unique Options
4.5    Categories and Components of MOR
4.6    MOR Part-of-Speech Categories
4.7    MOR Grammatical Categories
4.8    Compounds and Complex Forms
4.9    Errors and Replacements
4.10   Affixes
4.11   Control Features and Output Features
5      Correcting errors
5.1    Lexicon Building
5.2    Disambiguator Mode
6      A Formal Description of the Rule Files
6.1    Declarative structure
6.2    Pattern-matching symbols
6.3    Variable notation
6.4    Category Information Operators
6.5    Arules
6.6    Crules
7      Building new MOR grammars
7.1    minMOR
7.2    Adding affixes
7.3    Interactive MOR
7.4    Testing
7.5    Building Arules
7.6    Building crules
8      MOR for Bilingual Corpora
9      POST
9.1    POSTLIST
9.2    POSTMODRULES
9.3    PREPOST
9.4    POSTMORTEM
9.5    POSTTRAIN
9.6    POSTMOD
9.7    TRNFIX
10     GRASP – Syntactic Dependency Analysis
10.1   Grammatical Relations
10.2   Predicate-head relations
10.3   Argument-head relations
10.4   Extra-clausal elements
10.5   Cosmetic relations
10.6   MEGRASP
11     Building a training corpus
11.1   OBJ and OBJ2
11.2   JCT, NJCT and POBJ
11.3   PRED
11.4   AUX
11.5   NEG
11.6   MOD and POSS
11.7   CONJ and COORD
11.8   ENUM
11.9   POSTMOD
11.10  COMP, LINK
11.11  QUANT and PQ
11.12  CSUBJ, COBJ, CPOBJ, CPRED
11.13  CJCT and XJCT
11.14  CMOD and XMOD
11.15  BEG, BEGP, END, ENDP
11.16  COM and TAG
11.17  SRL, APP
11.18  NAME, DATE
11.19  INCROOT, OM
12     GRs for other languages
12.1   Spanish
12.2   Chinese
12.3   Japanese

 

1       Introduction

 

This third volume of the TalkBank manuals deals with the use of the programs that perform automatic computation of the morphosyntactic structure of transcripts in CHAT format.  These manuals, the programs, and the TalkBank datasets can all be downloaded freely from https://talkbank.org.

The first volume of the TalkBank manual describes the CHAT transcription format. The second volume describes the use of the CLAN data analysis programs. This third manual describes the use of the MOR, POST, POSTMORTEM, and MEGRASP programs to add a %mor and %gra line to CHAT transcripts.  The %mor line provides a complete part-of-speech tagging for every word indicated on the main line of the transcript.  The %gra line provides a further analysis of the grammatical dependencies between items in the %mor line.  These programs for morphosyntactic analysis are all built into CLAN. 

Users who do not wish to create or process information on the %mor and %gra lines will not need to read this current manual.  However, researchers and clinicians interested in these features will need to know the basics of the use of these programs, as described in the next chapter.  The additional sections of this manual are directed to researchers who wish to extend or improve the coverage of MOR and GRASP grammars or who wish to build such grammars for languages that are not yet covered.

 

2       Morphosyntactic Coding

Linguists and psycholinguists rely on the analysis of morphosyntax to illuminate core issues in learning and development. Generativist theories have emphasized issues such as: the role of triggers in the early setting of a parameter for subject omission (Hyams & Wexler, 1993), evidence for advanced early syntactic competence (Wexler, 1998), evidence for the early absence of functional categories that attach to the IP node (Radford, 1990), the role of optional infinitives in normal and disordered acquisition (Rice, 1997), and the child’s ability to process syntax without any exposure to relevant data (Crain, 1991). Generativists have sometimes been criticized for paying inadequate attention to the empirical patterns of distribution in children’s productions.  However, work by researchers in this tradition, such as Stromswold (1994), van Kampen (1998), and Meisel (1986), demonstrates the important role that transcript data can play in evaluating alternative generative accounts.

Learning theorists have placed an even greater emphasis on the use of transcripts for understanding morphosyntactic development.  Neural network models have shown how cue validities can determine the sequence of acquisition for both morphological (MacWhinney & Leinbach, 1991; MacWhinney, Leinbach, Taraban, & McDonald, 1989; Plunkett & Marchman, 1991) and syntactic (Elman, 1993; Mintz, Newport, & Bever, 2002; Siskind, 1999) development.  This work derives further support from a broad movement within linguistics toward a focus on data-driven models (Bybee & Hopper, 2001) for understanding language learning and structure.  These approaches view constructions (Tomasello, 2003) and item-based patterns (MacWhinney, 1975) as the loci for statistical learning.

The study of morphosyntax also plays an important role in the study and treatment of language disorders, such as aphasia, specific language impairment, stuttering, and dementia. For this work, both researchers and clinicians can benefit from methods for achieving accurate automatic analysis of correct and incorrect uses of morphosyntactic devices.  To address these needs, the TalkBank system uses the MOR command to automatically generate candidate morphological analyses on the %mor tier, the POST command to disambiguate these analyses, and the MEGRASP command to compute grammatical dependencies on the %gra tier.

2.1      One-to-one correspondence

MOR creates a %mor tier with a one-to-one correspondence between words on the main line and words on the %mor tier. In order to achieve this one-to-one correspondence, the following rules are observed:

1.     Each word group (see below) on the %mor line is surrounded by spaces or an initial tab, corresponding to the space-delimited word group on the main line.  The correspondence matches each %mor word (morphological word) to a main line word in left-to-right linear order in the utterance.

2.     Utterance delimiters are preserved on the %mor line to facilitate readability and analysis.  These delimiters should be the same as the ones used on the main line.

3.     Along with utterance delimiters, the satellite markers ‡ for the vocative and „ for tag questions or dislocations are also included on the %mor line in a one-to-one alignment format.

4.     Retracings and repetitions are excluded from this one-to-one mapping, as are nonwords such as xxx or strings beginning with &. When word repetitions are marked in the form word [x 3], the material in square brackets is stripped off and the word is considered as a single form.

5.     When a replacing form is indicated on the main line with the form [: text], the material on the %mor line corresponds to the replacing material in the square brackets, not the material that is being replaced. For example, if the main line has gonna [: going to], the %mor line will code going to.

6.     The [*] symbol that is used on the main line to indicate errors is not duplicated on the %mor line.
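The mapping rules above can be sketched in code. The following Python helper is purely illustrative (it is not part of CLAN, and it ignores retracing marks, which MOR also excludes); it returns the main-line words that should receive %mor entries:

```python
import re

# Illustrative only: compute which main-line words get %mor entries.
# Simplification: retracing groups like <...> [//] are not handled here.
def morable_words(main_line: str) -> list[str]:
    # Rule 5: a replacement [: text] stands in for the preceding word.
    line = re.sub(r"(\S+)\s*\[:\s*([^\]]+)\]", r"\2", main_line)
    # Rule 4: strip repetition markers of the form [x 3].
    line = re.sub(r"\[x\s*\d+\]", "", line)
    # Rule 6: the error marker [*] is not duplicated on the %mor line.
    line = line.replace("[*]", "")
    words = []
    for w in line.split():
        # Rule 4: skip fragments (&-forms) and unintelligible material.
        if w.startswith("&") or w in ("xxx", "yyy", "www"):
            continue
        words.append(w)
    return words

# Rule 2: the utterance delimiter survives as its own %mor item.
print(morable_words("gonna [: going to] eat &um xxx cookie [x 3] ."))
# ['going', 'to', 'eat', 'cookie', '.']
```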

2.2      Tag Groups and Word Groups

On the %mor line, alternative taggings of a given word are clustered together in tag groups. These groups include the alternative taggings of a word that are produced by the MOR program.  Alternatives are separated by the ^ character. Here is an example of a tag group for one of the most ambiguous words in English:

adv|back^adj|back^n|back^v|back

After you run the POST program on your files, all of these alternatives will be disambiguated and each word will have only one alternative.  You can also use the hand disambiguation method built into the CLAN editor to disambiguate each tag group case by case.

The next level of organization for the MOR line is the word group.  Word groups are combinations marked by the preclitic delimiter $, the postclitic delimiter ~ or the compound delimiter +.  For example, the Spanish word dámelo can be represented as

vimpsh|da-2S&IMP~pro:clit|1S~pro:clit|OBJ&MASC=give

This word group is a series of three words (verb~postclitic~postclitic) combined by the ~ marker. Clitics may be either preclitics or postclitics. Separable prefixes of the type found in German or Hungarian and other discontinuous morphemes can be represented as word groups using the preclitic delimiter $, as in this example for ausgegangen (“gone”):

prep|aus$PART#v|geh&PAST:PART=go

Note the difference between the coding of the preclitic “aus” and the prefix “ge” in this example. Compounds are also represented as combinations, as in this analysis of angel+fish.

n|+n|angel+n|fish

Here, the first characters (n|) represent the part of speech of the whole compound and the subsequent tags, after each plus sign, are for the parts of speech of the components of the compound.  Proper nouns are not treated as compounds.  Therefore, they take forms with underlines instead of pluses, such as Luke_Skywalker or New_York_City.
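These delimiters are easy to handle mechanically. As a rough sketch (illustrative only, not MOR's own code), alternatives in a tag group split at ^, while ~ and $ split a word group into its component words:

```python
import re

# Illustrative only: take apart %mor tag groups and word groups.
def alternatives(tag_group: str) -> list[str]:
    # ^ separates the alternative taggings of one word.
    return tag_group.split("^")

def word_group_parts(word_group: str) -> list[str]:
    # ~ (postclitic) and $ (preclitic) split the group into words;
    # + (compounds) stays inside a single word.
    return re.split(r"[~$]", word_group)

print(alternatives("adv|back^adj|back^n|back^v|back"))
print(word_group_parts("vimpsh|da-2S&IMP~pro:clit|1S~pro:clit|OBJ&MASC=give"))
```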

2.3      Words

Beneath the level of the word group is the level of the word. The structure of each individual word is:

prefix#

part-of-speech|

stem

&fusionalsuffix

-suffix

=english (optional, underscore joins words)

There can be any number of prefixes, fusional suffixes, and suffixes, but there should be only one stem. Prefixes and suffixes should be given in the order in which they occur in the word. Since fusional suffixes are fused parts of the stem, their order is indeterminate. The English translation of the stem is not a part of the morphology, but is included for the convenience of non-native speakers.  If the English translation requires two words, these words should be joined by an underscore as in “lose_flowers” for French défleurir.
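The template above can be captured with a regular expression. The sketch below is a simplified illustration, not MOR's own parser (it ignores the clitic and compound delimiters, which operate at the word-group level):

```python
import re

# Illustrative only: parse a single %mor word into the template's pieces.
WORD = re.compile(
    r"^(?P<prefixes>(?:[^#|]+#)+)?"      # any number of prefix# units
    r"(?P<pos>[^|]+)\|"                  # part-of-speech code
    r"(?P<stem>[^&=-]+)"                 # exactly one stem
    r"(?P<affixes>(?:[&-][^&=-]+)*)"     # &fusional and -plain suffixes
    r"(?:=(?P<english>\S+))?$"           # optional English gloss
)

def parse_word(word: str) -> dict:
    d = WORD.match(word).groupdict()
    d["prefixes"] = d["prefixes"].rstrip("#").split("#") if d["prefixes"] else []
    d["affixes"] = re.findall(r"[&-][^&=-]+", d["affixes"])
    return d

print(parse_word("v|ess-INF=eat"))
print(parse_word("v|be&PAST&13s"))
```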

Now let us look in greater detail at the nature of each of these types of coding. Throughout this discussion, bear in mind that all coding is done on a word-by-word basis, where words are considered to be strings separated by spaces.

2.4      Part of Speech Codes

The morphological codes on the %mor line begin with a part-of-speech code. The basic scheme for the part-of-speech code is:

category:subcategory:subcategory

Additional fields can be added, using the colon character as the field separator. The subcategory fields contain information about syntactic features of the word that are not marked overtly. For example, you may wish to code the fact that Italian “andare” is an intransitive verb even though there is no single morpheme that signals intransitivity. You can do this by using the part-of-speech code v:intrans, rather than by inserting a separate morpheme.

In order to avoid redundancy, information that is marked by a prefix or suffix is not incorporated into the part-of-speech code, as this information will be found to the right of the | delimiter. These codes can be given in either uppercase, as in ADJ, or lowercase, as in adj. In general, CHAT codes are not case-sensitive.
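As a small illustration (a hypothetical helper, not part of CLAN), the colon-delimited field structure and case-insensitivity can be handled like this:

```python
# Illustrative only: part-of-speech codes are colon-delimited fields,
# category first, then subcategories; codes are not case-sensitive.
def pos_fields(code: str) -> list[str]:
    return code.lower().split(":")

print(pos_fields("v:intrans"))   # ['v', 'intrans']
print(pos_fields("ADJ"))         # ['adj']
```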

The particular codes given below are the ones that MOR uses for automatic morphological tagging of English. Individual researchers will need to define a system of part-of-speech codes that correctly reflects their own research interests and theoretical commitments. Languages that are typologically quite different from English may have to use very different part-of-speech categories. Quirk, Greenbaum, Leech, and Svartvik (1985) explain some of the intricacies of part-of-speech coding.  Their analysis should be taken as definitive for all part-of-speech coding for English.  However, for many purposes, a more coarse-grained coding can be used.

The following set of top-level part-of-speech codes is the one used by the MOR program.  Additional refinements to this system can be found by studying the organization of the lexicon files for that program.  For example, in MOR, numbers are coded as types of determiners, because this is their typical usage.  The word “back” is coded as either a noun, verb, preposition, or adjective.  Further distinctions can be found by looking at the MOR lexicon.

Major Parts of Speech

 

Category

Code

Adjective

ADJ

Adverb

ADV

Communicator

CO

Conjunction

CONJ

Determiner

DET

Filler

FIL

Infinitive marker to

INF

Noun

N

Proper Noun

N:PROP

Number

DET:NUM

Particle

PTL

Preposition

PREP

Pronoun

PRO

Quantifier

QN

Verb

V

Auxiliary verb, including modals

V:AUX

WH words

WH

 

2.5      Stems

Every word on the %mor tier must include a “lemma” or stem as part of the morpheme analysis. The stem is found on the right hand side of the | delimiter, following any pre-clitics or prefixes. If the transcript is in English, this can be simply the canonical form of the word. For nouns, this is the singular. For verbs, it is the infinitive. If the transcript is in another language, it can be the English translation. A single form should be selected for each stem. Thus, the English indefinite article is coded as det|a with the lemma “a” whether or not the actual form of the article is “a” or “an.”

 

When English is not the main language of the transcript, the transcriber must decide whether to use English stems. Using English stems has the advantage that it makes the corpus more available to English-reading researchers. To show how this is done, take the German phrase “wir essen”:

*FRI:   wir essen.

%mor:   pro|wir=we v|ess-INF=eat .

Some projects may have reasons to avoid using English stems, even as translations. In this example, “essen” would be simply v|ess-INF. Other projects may wish to use only English stems and no target-language stems. Sometimes there are multiple possible translations into English. For example, German “Sie”/“sie” could be either “you,” “she,” or “they.”  Choosing a single English meaning for the stem helps fix the German form.

2.6      Affixes

Affixes and clitics are coded in the position in which they occur with relation to the stem. The morphological status of the affix should be identified by the following markers or delimiters: - for a suffix, # for a prefix, and & for fusional or infixed morphology.

The & is used to mark affixes that are not realized in a clearly isolable phonological shape. For example, the form “men” cannot be broken down into a part corresponding to the stem “man” and a part corresponding to the plural marker, because one cannot say that the vowel “e” marks the plural. For this reason, the word is coded as n|man&PL. The past forms of irregular verbs may undergo similar ablaut processes, as in “came,” which is coded v|come&PAST, or they may undergo no phonological change at all, as in “hit”, which is coded v|hit&PAST.  Sometimes there may be several codes indicated with the & after the stem. For example, the form “was” is coded v|be&PAST&13s.  Affix and clitic codes are based either on Latin forms for grammatical function or English words corresponding to particular closed-class items. MOR uses the following set of affix codes for automatic morphological tagging of English.

 

Inflectional Affixes for English

 

Function

Code

adjective suffix er, r

CP

adjective suffix est, st

SP

noun suffix ie

DIM

noun suffix s, es

PL

noun suffix 's, '

POSS

verb suffix s, es

3S

verb suffix ed, d

PAST

verb suffix ing

PRESP

verb suffix en

PASTP

 

Derivational Affixes for English

 

Function

Code

adjective and verb prefix un

UN

adverbializer ly

LY

nominalizer er

ER

noun prefix ex

EX

verb prefix dis

DIS

verb prefix mis

MIS

verb prefix out

OUT

verb prefix over

OVER

verb prefix pre

PRE

verb prefix pro

PRO

verb prefix re

RE

 

2.7      Clitics

Clitics are marked by a tilde, as in v|parl&IMP:2S=speak~pro|DAT:MASC:SG for Italian “parlagli” and pro|it~v|be&3s for English “it's.” Note that part of speech coding with the | symbol is repeated for clitics after the tilde. Both clitics and contracted elements are coded with the tilde. The use of the tilde for contracted elements extends to forms like “sul” in Italian, “ins” in German, or “rajta” in Hungarian in which prepositions are merged with articles or pronouns.

 

Clitic Codes for English

 

Clitic

Code

noun phrase post-clitic 'd

v:aux|would, v|have&PAST

noun phrase post-clitic 'll

v:aux|will

noun phrase post-clitic 'm

v|be&1S, v:aux|be&1S

noun phrase post-clitic 're

v|be&PRES, v:aux|be&PRES

noun phrase post-clitic 's

v|be&3S, v:aux|be&3S

verbal post-clitic n't

neg|not
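The clitic table can be read mechanically. Here is an illustrative sketch (not MOR's actual lexicon format) that maps each English clitic to its candidate codes from the table and joins ambiguous ones into a tag group with ^, for POST to disambiguate later:

```python
# Illustrative only: English clitics and their candidate %mor codes,
# taken from the table above.
CLITIC_CODES = {
    "'d":  ["v:aux|would", "v|have&PAST"],
    "'ll": ["v:aux|will"],
    "'m":  ["v|be&1S", "v:aux|be&1S"],
    "'re": ["v|be&PRES", "v:aux|be&PRES"],
    "'s":  ["v|be&3S", "v:aux|be&3S"],
    "n't": ["neg|not"],
}

def clitic_tag_group(clitic: str) -> str:
    # Alternatives are joined with ^, as in any %mor tag group.
    return "^".join(CLITIC_CODES[clitic])

print(clitic_tag_group("'s"))   # v|be&3S^v:aux|be&3S
```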

2.8      Compounds

Here are some words that we might want to treat as compounds: sweat+shirt, tennis+court, bathing+suit, high+school, play+ground, choo+choo+train, rock+'n'+roll, and sit+in. There are also many idiomatic phrases that could be best analyzed as linkages. Here are some examples: a_lot_of, all_of_a_sudden, at_last, for_sure, kind_of, of_course, once_and_for_all, once_upon_a_time, so_far, and lots_of.

On the %mor tier it is necessary to assign a part-of-speech label to each segment of the compound. For example, the word blackboard, transcribed as black+board, is coded on the %mor tier as n|+adj|black+n|board. Although the part of speech of the compound as a whole is usually given by the part of speech of the final segment, forms such as make+believe, which is coded as adj|+v|make+v|believe, show that this is not always true.

In order to preserve the one-to-one correspondence between words on the main line and words on the %mor tier, words that are not marked as compounds on the main line should not be coded as compounds on the %mor tier. For example, if the words “come here” are used as a rote form, then they should be written as “come_here” on the main tier. On the %mor tier this will be coded as v|come_here. It makes no sense to code this as v|come+adv|here, because that analysis would contradict the claim that this pair functions as a single unit. It is sometimes difficult to assign a part-of-speech code to a morpheme. In the usual case, the part-of-speech code should be chosen from the same set of codes used to label single words of the language. For example, some of the idiomatic phrases listed above can be coded as linkages on the %mor line.

 

Phrases Coded as Linkages

 

Phrase

Phrase

qn|a_lot_of

adv|all_of_a_sudden

 co|for_sure

adv:int|kind_of

adv|once_and_for_all

adv|once_upon_a_time

adv|so_far

qn|lots_of
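Compound codes follow a regular shape: the part of speech of the whole compound comes before the first +, and each subsequent plus sign introduces one tagged segment. A minimal sketch (illustrative only, not MOR's own code):

```python
# Illustrative only: split a %mor compound code into the part of speech
# of the whole and its tagged segments.
def parse_compound(code: str) -> tuple[str, list[str]]:
    whole, *parts = code.split("+")
    return whole.rstrip("|"), parts

print(parse_compound("n|+adj|black+n|board"))    # ('n', ['adj|black', 'n|board'])
print(parse_compound("adj|+v|make+v|believe"))   # ('adj', ['v|make', 'v|believe'])
```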

2.9      Punctuation Marks

MOR can be configured to recognize certain punctuation marks as whole word characters.  In particular, the file punct.cut contains these entries:

„       {[scat end]} "end"

‡       {[scat beg]} "beg"

,       {[scat cm]} "cm"

“       {[scat bq]} "bq"

”       {[scat eq]} "eq"

        {[scat bq]} "bq2"

        {[scat eq]} "eq2"

When the punctuation marks on the left occur in text, they are treated as separate lexical items and are mapped to forms such as beg|beg on the %mor tier.  The “end” marker is used to mark postposed forms such as tags and sentence final particles.  The “beg” marker is used to mark preposed forms such as vocatives and communicators.  The “bq” marks the beginning of a quote and the “eq” marks the end of a quote.  These special characters are important for correctly structuring the dependency relations for the GRASP program.

2.10  Sample Morphological Tagging for English

The following table describes and illustrates a more detailed set of word class codings for English. The %mor tier examples correspond to the labellings MOR produces for the words in question. It is possible to augment or simplify this set, either by creating additional word categories, or by adding additional fields to the part-of-speech label, as discussed previously.  The entries in this table and elsewhere in this manual can always be double-checked against the current version of the MOR grammar by typing “mor +xi” to bring up interactive MOR and then entering the word to be analyzed.

 

Word Classes for English

 

Class

Examples

Coding of Examples

adjective

big

adj|big

adjective, comparative

bigger, better

adj|big-CP, adj|good&CP

adjective, superlative

biggest, best

adj|big-SP, adj|good&SP

adverb

well

adv|well

adverb, ending in ly

quickly

adv:adj|quick-LY

adverb, intensifying

very, rather

adv:int|very, adv:int|rather

adverb, post-qualifying

enough, indeed

adv|enough, adv|indeed

adverb, locative

here, then

adv:loc|here, adv:tem|then

communicator

aha

co|aha

conjunction, coord.

and, or

conj:coo|and, conj:coo|or

conjunction, subord.

if, although

conj:sub|if, conj:sub|although

determiner, singular

a, the, this

det|a, det|this

determiner, plural

these, those

det|these, det|those

determiner, possessive

my, your, her

det:poss|my

infinitive marker

to

inf|to

noun, common

cat, coffee

n|cat, n|coffee

noun, plural

cats

n|cat-PL

noun, possessive

cat's

n|cat~poss|s

noun, plural possessive

cats'

n|cat-PL~poss|s

noun, proper

Mary

n:prop|Mary

noun, proper, plural

Mary-s

n:prop|Mary-PL

noun, proper, possessive

Mary's

n:prop|Mary~poss|s

noun, proper, pl. poss.

Marys'

n:prop|Mary-PL~poss|s

noun, adverbial

home, west

n|home, adv:loc|home

number, cardinal

two

det:num|two

number, ordinal

second

adj|second

postquantifier

all, both

post|all, post|both

preposition

in

prep|in, adv:loc|in

pronoun, personal

I, me, we, us, he

pro|I, pro|me, pro|we, pro|us

pronoun, reflexive

myself, ourselves

pro:refl|myself

pronoun, possessive

mine, yours, his

pro:poss|mine, pro:poss:det|his

pronoun, demonstrative

that, this, these

pro:dem|that

pronoun, indefinite

everybody, nothing

pro:indef|everybody

pronoun, indef., poss.

everybody's

pro:indef|everybody~poss|s

quantifier

half, all

qn|half, qn|all

verb, base form

walk, run

v|walk, v|run

verb, 3rd singular present

walks, runs

v|walk-3S, v|run-3S

verb, past tense

walked, ran

v|walk-PAST, v|run&PAST

verb, present participle

walking, running

part|walk-PRESP, part|run-PRESP

verb, past participle

walked, run

part|walk-PASTP, part|run&PASTP

verb, modal auxiliary

can, could, must

aux|can, aux|could, aux|must

 

Since it is sometimes difficult to decide what part of speech a word belongs to, we offer the following overview of the different part-of-speech labels used in the standard English grammar.

 

ADJectives modify nouns, either prenominally or predicatively. Unitary compound modifiers such as good-looking should be labeled as adjectives.

 

ADVerbs cover a heterogeneous class of words including: manner adverbs, which generally end in -ly; locative adverbs, which include expressions of time and place; intensifiers that modify adjectives; and post-head modifiers, such as indeed and enough.

 

COmmunicators are used for interactive and communicative forms which fulfill a variety of functions in speech and conversation. Also included in this category are words used to express emotion, as well as imitative and onomatopoeic forms, such as ah, aw, boom, boom-boom, icky, wow, yuck, and yummy.

 

CONJunctions conjoin two or more words, phrases, or sentences. Examples include: although, because, if, unless, and until.

 

COORDinators include and, or, and as well as.  These can combine clauses, phrases, or words.

 

DETerminers include articles and definite and indefinite determiners. Possessive determiners such as my and your are tagged det:poss.

 

INFinitive is the word “to” which is tagged inf|to.

 

INTerjections are similar to communicators, but they typically can stand alone as complete utterances or fragments, rather than being integrated as parts of the utterances.  They include forms such as wow, hello, good-morning, good-bye, please, thank-you.

 

Nouns are tagged with n for common nouns, and n:prop for proper nouns (names of people, places, fictional characters, brand-name products).

 

NEGative is the word “not” which is tagged neg|not.

 

NUMbers are labelled det:num for cardinal numbers. The ordinal numbers are adjectives.

 

Onomatopoeia are words that imitate the sounds of nature, animals, and other noises.

 

Particles are words that are often also prepositions, but are serving as verbal particles.

 

PREPositions are the heads of prepositional phrases. When a preposition is not a part of a phrase, it should be coded as a particle or an adverb.

 

PROnouns include a variety of structures, such as reflexives, possessives, personal pronouns, deictic pronouns, etc.

 

QUANTifiers include each, every, all, some, and similar items.

 

Verbs can be either main verbs, copulas, or auxiliaries.

 

3       Running the Program Chain

 

It is possible to construct a complete automatic morphosyntactic analysis of a series of CHAT transcripts through a single command in CLAN, once you have the needed programs in the correct configuration.  This command runs the MOR, POST, POSTMORTEM, and MEGRASP commands in an automatic sequence or chain. To do this, follow these steps:

1.     Place all the files you wish to analyze into a single folder.

2.     Start the CLAN program (see Part 2 of the manual for instructions on installing CLAN).

3.     In CLAN’s Commands window, click on the button labelled Working to set your working directory to the folder that has the files to be analyzed.

4.     Under the File menu at the top of the screen, select Get MOR Grammar and select the language you want to analyze.  To do this, you must be connected to the Internet. If you have already done this once, you do not need to do it again.  By default, the MOR grammar you have chosen will download to your desktop.

5.     If you choose to move your MOR grammar to another location, you will need to use the Mor Lib button in the Commands window to tell CLAN where to locate it.

6.     To analyze all the files in your Working directory folder, enter this command in the Commands window: mor *.cha

7.     CLAN will then run these programs in sequence: MOR, POST, POSTMORTEM, and MEGRASP. These programs will add %mor and %gra lines to your files.

8.     If this command ends with a message saying that some words were not recognized, you will probably want to fix them.  If you do not, some of the entries on the %mor line will be incomplete and the relations on the %gra line will be less accurate. If you have doubts about the spellings of certain words, you can look in the 0allwords.cdc file that is included in the /lex folder for each language.  The words there are listed in alphabetical order.

9.     To correct errors, you can run this command: mor +xb *.cha. Guidelines for fixing errors are given in chapter 5 below.

4       Morphological Analysis

4.1      The Design of MOR

The computational design of MOR was guided by Roland Hausser’s (1990) MORPH system and was implemented by Mitzi Morris. Since 2000, Leonid Spektor has extended MOR in many ways.  Christophe Parisse built POST and POSTTRAIN (Parisse & Le Normand, 2000). Kenji Sagae built MEGRASP as a part of his dissertation work for the Language Technologies Institute at Carnegie Mellon University (Sagae, MacWhinney, & Lavie, 2004a, 2004b).  Leonid Spektor then integrated the program into CLAN.

The system has been designed to maximize portability across languages, extendability of the lexicon and grammar, and compatibility with the CLAN programs. The basic engine of the parser is language independent. Language-specific information is stored in separate data files that can be modified by the user. The lexical entries are also kept in ASCII files and there are several techniques for improving the match of the lexicon to a corpus. To maximize the complete analysis of regular formations, only stems are stored in the lexicon and inflected forms appropriate for each stem are compiled at run time.

4.2      Example Analyses

To give an example of the results of a MOR analysis for English, consider this sentence from eve15.cha in Roger Brown’s corpus for Eve. 

*CHI:   oops I spilled it.

%mor:   co|oops pro:subj|I v|spill-PAST pro:per|it .

Here, the main line gives the child’s production and the %mor line gives the part of speech for each word, along with the morphological analysis of affixes, such as the past tense mark (-PAST) on the verb.  The %mor lines in these files were not created by hand.  To produce them, we ran the MOR command, using the MOR grammar for English, which can be downloaded using the Get MOR Grammar function described in the previous chapter. The command for running MOR by itself without running the rest of the chain is: mor +d *.cha. After running MOR, the file looks like this:

*CHI:  oops I spilled it .

%mor:  co|oops pro:subj|I part|spill-PASTP^v|spill-PAST pro:per|it .

In the %mor tier, words are labeled by their syntactic category or “scat”, followed by the pipe separator |, followed by the stem and affixes. Notice that the word “spilled” is initially ambiguous between the past tense and participle readings. The two alternatives are separated by the ^ character.  To resolve such ambiguities, we run a program called POST. The command is just “post *.cha”. After POST has been run, the %mor line will only have v|spill-PAST.

Using this disambiguated form, we can then run the MEGRASP program to create the representation given in the %gra line below:

*CHI:   oops I spilled it .

%mor:   co|oops pro:subj|I v|spill-PAST pro:per|it .

%gra:   1|3|COM 2|3|SUBJ 3|0|ROOT 4|3|OBJ 5|3|PUNCT

In the %gra line, we see that the second word “I” is related to the verb (“spilled”) through the grammatical relation (GR) of Subject.  The fourth word “it” is related to the verb through the grammatical relation of Object.  The verb is the Root and it is related to the “left wall” or item 0.
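Each token on the %gra line has the shape index|head|relation, with head 0 standing for the left wall. A minimal Python sketch (illustrative only, not MEGRASP's code) unpacks the line into dependency triples:

```python
# Illustrative only: read a %gra line as (index, head, relation) triples.
def parse_gra(gra_line: str) -> list[tuple[int, int, str]]:
    triples = []
    for token in gra_line.split():
        index, head, relation = token.split("|")
        triples.append((int(index), int(head), relation))
    return triples

deps = parse_gra("1|3|COM 2|3|SUBJ 3|0|ROOT 4|3|OBJ 5|3|PUNCT")
print(deps[2])   # (3, 0, 'ROOT')
```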

4.3      Exclusions in MOR

Because MOR focuses on the analysis of the target utterance, it excludes a variety of non-words, retraces, and special symbols. Specifically, MOR excludes:

1.     Items that start with &

2.     Pauses such as (.)

3.     Unknown forms marked as xxx, yyy, or www

4.     Data associated with these codes: [/?],  [/-], [/], [//], and [///].

4.4      Unique Options

+d     do not run POST command automatically.  POST will run automatically after MOR, unless this switch is used or unless the folder name includes the word “train”.

 

+eS    Show the result of the operation of the arules on either a stem S or stems in file @S.  This output will go into a file called debug.cdc in your library directory.  Another way of achieving this is to use the +d option inside “interactive MOR”.

 

+p     use pinyin lexicon format for Chinese

 

+xi     Run mor in the interactive test mode. You type in one word at a time to the test prompt and mor provides the analysis on line.  This facility makes the following commands available in the CLAN Output window:

        word - analyze this word

        :q  quit - exit program

        :c  print out current set of crules

        :d  display application of arules.

        :l  re-load rules and lexicon files

        :h  help - print this message

 

If you type in a word, such as “dog” or “perro,” MOR will try to analyze it and give you its component morphemes.  If you change the rules or the lexicon, use :l to reload and retest.  The :c and :d switches will send output to a file called debug.cdc in your library directory.

 

+xl     Run mor in the lexicon building mode. This mode takes a series of .cha files as input and outputs a small lexical file with the extension .ulx with entries for all words not recognized by mor. This helps in the building of lexicons.

 

+xb   check lexicon mode, include word location in data files

+xa    check lexicon for ambiguous entries

+xc    check lexicon mode, including capitalized words

+xd   check lexicon for compound words conflicting with plain words

+xp   check lexicon mode, including words with prosodic symbols

+xy    analyze words in lex files

4.5      Categories and Components of MOR

MOR breaks up words into their component parts or morphemes.  In a relatively analytic language like English, many words require no analysis at all.  However, even in English, a word like “coworkers” contains four component morphemes: the prefix “co”, the stem, the agential suffix, and the plural.  For this form, MOR will produce the analysis: co#n:v|work-AGT-PL.  This representation uses the symbols # and - to separate the four different morphemes.  Here, the prefix stands at the beginning of the analysis, followed by the stem (n:v|work) and the two suffixes.  In general, stems are represented by a part-of-speech category, such as “n” for noun, followed by the vertical bar and then the stem’s lexical form. 
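These delimiter conventions can be made concrete with a short Python sketch (the helper name is invented, and it handles only the regular #, |, and - delimiters, not the & used for irregular forms):

```python
# Sketch: split a MOR analysis on its delimiters: "#" ends a prefix,
# "|" separates the part-of-speech category from the stem, and "-"
# introduces each regular suffix. (Irregular "&" marks are not handled.)

def decompose(analysis):
    *prefixes, rest = analysis.split("#")
    scat, stem_and_suffixes = rest.split("|", 1)
    stem, *suffixes = stem_and_suffixes.split("-")
    return {"prefixes": prefixes, "scat": scat,
            "stem": stem, "suffixes": suffixes}

assert decompose("co#n:v|work-AGT-PL") == {
    "prefixes": ["co"], "scat": "n:v",
    "stem": "work", "suffixes": ["AGT", "PL"]}
```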

 

To understand the functioning of the MOR grammar for English, the best place to begin is with a tour of the files inside the ENG folder that you can download from the server.  At the top level, you will see these files:

1.     ar.cut – These are the rules that generate allomorphic variants from the stems and affixes in the lexical files.

2.     cr.cut – These are the rules that specify the possible combinations of morphemes going from left to right in a word.

3.     debug.cdc – This file holds the complete trace of an analysis of a given word by MOR.  It always holds the results of the most recent analysis.  It is mostly useful for people who are developing new ar.cut or cr.cut files as a way of tracing out or debugging problems with these rules.

4.     docs – This is a folder containing a file of instructions on how to train POST and a list of tags and categories used in the English grammar.

5.     post.db – This is a file used by POST and should be left untouched.

6.     ex.cut – This file includes analyses that are being “overgenerated” by MOR and should simply be filtered out or excluded whenever they occur.

7.     lex – This folder contains many files listing the stems and affixes of the language.  We will examine it in greater detail below.

8.     sf.cut – This file tells MOR how to deal with words that end with certain special form markers such as @b for babbling.

When examining these files and others, please note that the exact shapes of the files, the word listings, and the rules will change over time.  We recommend that users glance through these various files to understand their contents.

 

The first action of the parser program is to load the ar.cut file. Next, the program reads in the files in your lexicon folder and uses the rules in ar.cut to build the run-time lexicon. Once the run-time lexicon is loaded, the parser reads in the cr.cut file. Additionally, if the +b option is specified, the dr.cut file is also read in. Once the concatenation rules have been loaded, the program is ready to analyze input words. As a user, you do not need to concern yourself with the run-time lexicon. Your main concern is the entries in the lexicon files. For languages that already have a MOR grammar, the rules in the ar.cut and cr.cut files are only of concern if you wish to have a set of analyses and labelings that differs from the one given in the chapter on morphosyntactic coding, or if you are trying to write a new set of grammars for some language.

4.6      MOR Part-of-Speech Categories

The final output of MOR on the %mor line uses two sets of categories: part-of-speech (POS) names and grammatical categories.  To survey the part-of-speech names for English, we can look at the files contained inside the /lex folder.  These files break out the possible words of English into different files for each specific part of speech or compound structure.  Because these distinctions are so important to the correct transcription of child language and the correct running of MOR, it is worthwhile to consider the contents of each of these various files.  As the following table shows, about half of these word types involve different part-of-speech configurations within compounds. This analysis of compounds into their part-of-speech components is intended to further study of the child’s learning of compounds, as well as to provide good information regarding the part of speech of the whole.

The names of the compound files indicate their composition.  For example, the name adj+n+adj.cut indicates compounds composed of a noun followed by an adjective (n+adj) whose overall function is that of an adjective.  This means that the compound is treated just as an adjective (adj) by the MOR and GRASP programs.  In English, the part of speech of a compound is usually the same as that of the last component of the compound.

A few additional part-of-speech (POS) categories are introduced by the 0affix.cut file.  These include: n-cl (noun clitic), v-cl (verb clitic), part (participle), and n:gerund (gerund). Additional categories on the %mor line are introduced from the special form marker file called sf.cut.  The meanings of these various special form markers are given in the CHAT manual.  Finally, the punctuation codes bq, eq, end, beg, and cm are the POS codes used for the special character marks given in the punct.cut file.
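This file-naming convention can be read mechanically: the part before the first + gives the overall POS, and the remainder names the components. A hypothetical Python sketch:

```python
# Sketch: interpret a compound lexicon file name such as
# "adj+n+adj.cut" as an overall POS followed by component POSes.

def compound_file_pos(filename):
    name = filename[:-4] if filename.endswith(".cut") else filename
    overall, *components = name.split("+")
    return overall, components

overall, parts = compound_file_pos("adj+n+adj.cut")
assert overall == "adj"         # the whole compound acts as an adjective
assert parts == ["n", "adj"]    # built from a noun plus an adjective
```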


 

File (.cut)       POS           Function                    Example
0affix            mixed         prefixes and suffixes       see expanded list below
0uk               mixed         terms local to the UK       fave, doofer, sixpence
adj-baby          adj           baby talk adjectives        dipsy, yumsy
adj-dup           adj           baby talk doubles           nice+nice, pink+pink
adj-ir            adj           irregular adjectives        better, furthest
adj-num           adj           ordinal numerals            eleventh
adj-pred          adj:pred      predicative adjectives      abreast, remiss
adj-under         adj           combined adjectives         close_by, lovey_dovey
adj               adj           regular adjectives          tall, redundant
adj+adj+adj       adj           compounds                   half+hearted, hot+crossed
adj+adj+adj(on)   adj           compounds                   super+duper, easy+peasy
adj+n+adj         adj           compounds                   dog+eared, stir+crazy
adj+v+prep+n      adj           compounds                   pay+per+view
adj+v+v           adj           compounds                   make+believe, see+through
adv-tem           adv           temporal adverbs            tomorrow, tonight, anytime
adv-under         adv           combined adverbs            how_about, as_well
adv-wh            adv:wh        wh terms                    where, why
adv               adv           regular adverbs             ajar, fast, mostly
adv+adj+adv       adv           compounds                   half+off, slant+wise
adv+adj+n         adv           compounds                   half+way, off+shore
adv+n+prep+n      adv           compounds                   face+to+face
co-cant           co            Cantonese forms             wo, wai, la
co-voc            co            vocatives                   honey, dear, sir
co-rhymes         co            rhymes, onomatopoeia        cock_a_doodle_doo
co_under          co            multiword phrases           by_jove, gee_whiz
co                co            regular communicators       blah, byebye, gah, no
conj-under        conj          combined conjunctions       even_though, in_case_that
conj              conj          conjunctions                and, although, because
det-art           det, art      deictic determiners         this, that, the
det-num           det:num       cardinals                   two, twelve
n-abbrev          n             abbreviations               c_d, t_v, w_c
n-baby            n             babytalk forms              passie, wawa, booboo
n-dashed          n             noun combinations           cul_de_sac, seven_up
n-dup             n             duplicate nouns             cow+cow, chick_chick
n-irr             n             irregular nouns             children, cacti, teeth
n-loan            n             loan words                  goyim, amigo, smuck
n-pluraletant     n:pt          nouns with no singular      galoshes, kinesics, scissors
n                 n             regular nouns               dog, corner, window
n+adj+n           n             compounds                   big+shot, cutie+pie
n+adj+v+adj       n             compounds                   merry+go+round
n+n+conj+n        n             compounds                   four+by+four, dot+to+dot
n+n+n-on          n             compounds                   quack+duck, moo+cow
n+n+n             n             compounds                   candy+bar, foot+race
n+n+novel         n             compounds                   children+bed, dog+fish
n+n+prep+det+n    n             compounds                   corn+on+the+cob
n+on+on-baby      n             compounds                   wee+wee, meow+meow
n+v+x+n           n             compounds                   jump+over+hand
n+v+n             n             compounds                   squirm+worm, snap+bead
n+prep            n             compounds                   chin+up, hide+out
on                on            onomatopoeia                boom, choo_choo
on+on+on          on            compounds                   cluck+cluck, knock+knock
post              post          post-modifiers              all, too
prep-under        prep          combined prepositions       out_of, in_between
prep              prep          prepositions                under, minus
pro-dem           pro:dem       demonstrative pronouns      this, that
pro-indef         pro:indef     indefinite pronouns         everybody, few
pro-per           see file      personal pronouns           he, himself
pro-poss          pro:poss      possessive pronouns         hers, mine
pro-poss-det      pro:poss:det  possessive determiners      her, my
pro-wh            pro:wh        interrogative pronouns      who, what
quan              qn            quantifiers                 some, all, only, most
rel               rel           relativizers                that, which
small             inf, neg      small forms                 not, to, xxx, yyy
v-aux             aux           auxiliaries                 had, getting
v-baby            v             baby verbs                  wee, poo
v-clit            v             cliticized forms            gonna, looka
v-cop             cop           copulas                     be, become
v-dup             v             verb duplications           eat+eat, drip+drip
v-irr             v             irregular verbs             came, beset, slept
v-mod-aux         mod:aux       modal auxiliaries           hafta, gotta
v-mod             mod           modals                      can, ought
v                 v             regular verbs               run, take, remember
v+adj+v           v             compounds                   deep+fry, tippy+toe
v+n+v             v             compounds                   bunny+hop, sleep+walk
v+v+conj+v        v             compounds                   hide+and+seek
zero              0x            omitted words               0know, 0conj, 0n, 0is

 

The construction of these lexicon files involves a variety of decisions. Here are some of the most important issues to consider.

1.            Words may often appear in several files.  For example, virtually every noun in English can also function as a verb.  However, when this function is indicated by a suffix, as in “milking”, the noun can be recognized as a verb through a process of morphological derivation encoded in a rule in the cr.cut file.  In such cases, it is not necessary to list the word as a verb.  Of course, this process fails for unmarked verbs.  However, it is generally not a good idea to represent all nouns as verbs, since this tends to overgenerate ambiguity.  Instead, it is possible to use the POSTMORTEM program to detect cases where nouns are functioning as bare verbs. 

2.            If a word can be analyzed morphologically, it should not be given a full listing.  For example, since “coworker” can be analyzed by MOR into three morphemes as co#n:v|work-AGT, it should not be separately listed in the n.cut file.  If it is, then POST will not be able to distinguish co#n:v|work-AGT from n|coworker.

3.            In the zero.cut file, possible omitted words are listed without the preceding 0.  For example, there are entries for “conj” and “the”.  However, in the transcript, these would be represented as “0conj” and “0the”.

4.            It is always best to use spaces to break up word sequences that are just combinations of words.  For example, instead of transcribing 1964 as “nineteen+sixty+four”, “nineteen-sixty-four”, or “nineteen_sixty_four”, it is best to transcribe simply as “nineteen sixty four”.  This principle is particularly important for Chinese, where there is a tendency to underutilize spaces, since Chinese itself is written without spaces.

5.            For most languages that use Roman characters, you can rely on capitalization to force MOR to treat words as proper nouns.  To understand this, take a look at the forms in the sf.cut file at the top of the MOR directory.  These various entries tell MOR how to process forms like k@l for the letter “k” or John_Paul_Jones for the famous admiral.  The symbol \c indicates that a form is capitalized and the symbol \l indicates that it is lowercase.

4.7      MOR Grammatical Categories

In addition to the various part-of-speech categories provided by the lexicon, MOR also inserts a series of grammatical categories, based on the information about affixes in the 0affix.cut file, as well as information inserted by the a-rules and c-rules.  If the affix is attached regularly, its category is preceded by a dash.  If it is irregular, it is preceded by an ampersand. For English, the inflectional categories are:

 

Abbreviation   Meaning                  Example   Analysis
PL             nominal plural           cats      n|cat-PL
PAST           past tense               pulled    v|pull-PAST
PRESP          present participle       pulling   v|pull-PRESP
PASTP          past participle          broken    v|break-PASTP
PRES           present                  am        cop|be&1S&PRES
1S             first singular           am        cop|be&1S&PRES
3S             third singular present   is        cop|be&3S&PRES
13S            first and third          was       cop|be&PAST&13S

 

In addition to these inflectional categories, English uses these derivational morphemes:

 

Abbreviation   Meaning               Example     Analysis
CP             comparative           stronger    adj|strong-CP
SP             superlative           strongest   adj|strong-SP
AGT            agent                 runner      n|run&dv-AGT
DIM            diminutive            doggie      n|dog-DIM
FUL            denominal             hopeful     adj|hope&dn-FULL
NESS           deadjectival          goodness    n|good&dadj-NESS
ISH            denominal             childish    adj|child&dn-ISH
ABLE           deverbal              likeable    adj|like&dv-ABLE
LY             deadjectival          happily     adj|happy&dadj-LY
Y              deverbal, denominal   sticky      adj|stick&dn-Y

 

In these examples, the features dn, dv, and dadj indicate derivation of the forms from nouns, verbs, or adjectives.
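The dash versus ampersand distinction in these analyses can be recovered with a simple pattern match, as in this illustrative Python sketch (the function name is invented):

```python
import re

# Sketch: list the affix and feature categories in an analysis,
# noting whether each was attached regularly (-) or irregularly (&).

def affix_categories(analysis):
    _, stem_part = analysis.split("|", 1)
    return [(m.group(1), m.group(2))
            for m in re.finditer(r"([-&])([^-&]+)", stem_part)]

assert affix_categories("v|pull-PAST") == [("-", "PAST")]
assert affix_categories("cop|be&1S&PRES") == [("&", "1S"), ("&", "PRES")]
assert affix_categories("n|run&dv-AGT") == [("&", "dv"), ("-", "AGT")]
```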

 

Other languages use many of these same features, but add many others, particularly in the case of highly inflecting languages.  Sometimes these are lowercase and sometimes uppercase.  Here are some examples:

 

Affix   Meaning
KONJ    subjunctive
ADV     adverbial
m       masculine
SUB     subjunctive
SG      singular
f       feminine
COND    conditional
PL      plural
AUG     augmentative
NOM     nominative
IMP     imperative
PROG    progressive
ACC     accusative
IMPF    imperfective
PRET    preterite
DAT     dative
FUT     future
GEN     genitive
PASS    passive

 

 

 

4.8      Compounds and Complex Forms

The lexical files include many special compound files such as n+n+n.cut or v+n+v.cut. Compounds are listed in the lexical files according to both their overall part of speech (X-bar) and the parts of speech of their components.  However, there are six types of complex word combinations that should not be treated as compounds.

  1. Underscored words.  The n-under.cut file includes 40 forms that resemble compounds, but which are best viewed as units with non-morphemic components.  For example, kool_aid and band_aid are not analytic combinations of morphemes, although they clearly have two components.  The same is true for hi_fi and coca_cola.  In general, MOR and CLAN pay little attention to the underscore character, so it can be used as needed when a plus for compounding is not appropriate. The underscore mark is particularly useful for representing the combinations of words found in proper nouns such as John_Paul_Jones, Columbia_University, or The_Beauty_and_the_Beast.  If these words are capitalized, they do not need to be included in the MOR lexicon, since all capitalized words are taken as proper nouns in English.  However, these forms cannot contain pluses, since compounds are not proper nouns.  And please be careful not to overuse this form.
  2. Separate words.  Many noun-noun combinations in English should just be written out as separate words.  An example would be “faucet stem assembly rubber gasket holder”. It is worth noting here that German treats all such forms as single words. This means that different conventions have to be adopted for German in order to avoid the need for exhaustive listing of the infinite number of German compound nouns.
  3. Spelling sequences.  Sequences of letter names such as “O-U-T” for the spelling of “out” are transcribed with the suffix @k, as in out@k.
  4. Acronyms. Forms such as FBI are transcribed with underscores, as in F_B_I.  Presence of the initial capital letter tells MOR to treat F_B_I as a proper noun. This same format is used for non-proper abbreviations such as c_d or d_v_d. 
  5. Products.  Coming up with good forms for commercial products such as Coca-Cola is tricky.  Because of the need to ban the use of the dash on the main line, we have avoided the use of the dash in these names.  They should not be treated as compounds, as in coca+cola, and compounds cannot be capitalized, so Coca+Cola is not possible.  This leaves us with the option of either coca_cola or Coca_Cola.  The option coca_cola seems best, since this is not a proper noun.
  6. Babbling and word play.  In earlier versions of CHAT and MOR, transcribers often represented sequences of babbling or word-play syllables as compounds.  This was done mostly because the plus provides a nice way of separating the syllables in these productions.  To make it clear that these separations are marked simply for purposes of syllabification, we now ask transcribers to use forms such as ba^ba^ga^ga@wp or choo^bung^choo^bung@o to represent these patterns.

The introduction of this more precise system for transcription of complex forms opens up additional options for programs like MLU, KWAL, FREQ, and GRASP.  For MLU, compounds will be counted as single words, unless the plus sign is added to the morpheme delimiter set using the +b+ option switch.  For GRASP, processing of compounds only needs to look at the overall part of speech of the compound, since the internal composition of the compound is not relevant to the syntax.  Additionally, forms such as "faucet handle valve washer assembly" do not need to be treated as compounds, since GRASP can learn to treat sequences of nouns as complex phrases headed by the final noun. 
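The effect of the compound delimiter on a word count can be illustrated with a simplified sketch; this is not the MLU program itself, which counts morphemes and handles many more symbols:

```python
# Sketch: count words in an utterance, treating a "+" compound as one
# word unless compounds are split (the effect of adding "+" to the
# morpheme delimiter set with the +b+ switch).

def word_count(utterance, split_compounds=False):
    words = [w for w in utterance.split() if w not in {".", "?", "!"}]
    if not split_compounds:
        return len(words)
    return sum(len(w.split("+")) for w in words)

utt = "I want a candy+bar ."
assert word_count(utt) == 4
assert word_count(utt, split_compounds=True) == 5
```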

4.9      Errors and Replacements

Transcriptions on the main line have to serve two, sometimes conflicting (Edwards, 1992), functions.  On the one hand, they need to represent the form of the speech as actually produced.  On the other hand, they need to provide input that can be used for morphosyntactic analysis.  When words are pronounced in their standard form, these two functions are in alignment.  However, when words are pronounced with phonological or morphological errors, it is important to separate out the actual production from the morphological target.  This can be done through a system for main line tagging of errors.  This system largely replaces the coding of errors on a separate %err line, although that form is still available, if needed.  The form of the newer system is illustrated here:

 

*CHI:  him [* case] ated [: ate] [* +ed-sup] a f(l)ower and a pun [: bun].

 

For the first error, there is no need to provide a replacement, since MOR can process “him” as a standard pronoun.  However, since the second word is not a real word form, the replacement is necessary in order to tell MOR how to process the form.  The third error is just an omission of “l” from the cluster and the final error is a mispronunciation of the initial consonant. Phonological errors are not coded here, since that level of analysis is best conducted inside the Phon program (Rose et al., 2005).
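The way such a line is prepared for analysis can be approximated in Python. These regular expressions are illustrative only; CLAN's actual handling of the [: ] and [* ] codes is more thorough:

```python
import re

# Sketch: substitute each [: target] replacement for the word it
# follows, drop [* ...] error codes, and restore (omitted) material.

def analysis_form(main_line):
    line = re.sub(r"(\S+)\s*\[: ([^\]]+)\]", r"\2", main_line)  # replacements
    line = re.sub(r"\[\*[^\]]*\]", "", line)                    # error codes
    line = line.replace("(", "").replace(")", "")               # completions
    return " ".join(line.split())

line = "him [* case] ated [: ate] [* +ed-sup] a f(l)ower and a pun [: bun] ."
assert analysis_form(line) == "him ate a flower and a bun ."
```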

4.10  Affixes

The inflectional and derivational affixes of English are listed in the 0affix.cut file. 

1.     This file begins with a list of prefixes, such as “mis” and “semi”, that attach either to nouns or verbs. Each prefix also has a permission feature, such as [allow mis].  This feature only comes into play when a noun or verb in n.cut or v.cut also has the feature [pre no].  For example, the verb “test” has the feature [pre no] in order to block prefixing with “de-” to produce “detest”, which is not a derivational form of “test”.  At the same time, because we want to permit prefixing with “re-”, the entry for “test” has [pre no][allow re].  Then, when the relevant rule in cr.cut sees a verb following “re-”, it checks for a match in the [allow] feature and permits the attachment in this case.

2.     Next we see some derivational suffixes such as diminutive –ie or agential –er.  Unlike the prefixes, these suffixes often change the spelling of the stem by dropping silent e or doubling final consonants.  The ar.cut file controls this process, and the [allo x] features listed there control the selection of the correct form of the suffix.

3.     Each suffix is represented by a grammatical category in parentheses.  These categories are taken from a typologically valid list given in the CHAT Manual.

4.     Each suffix specifies the grammatical category of the form that will result after its attachment.  For suffixes that change the part of speech, this is given in the scat, as in [scat adj:n].  Prefixes do not change parts of speech, so they are simply listed as [scat pfx] and use the [pcat x] feature to specify the shape of the forms to which they can attach.

5.     The long list of suffixes concludes with a list of cliticized auxiliaries and reduced main verbs.  These forms are represented in English as contractions.  Many of these forms are multiply ambiguous and it will be the job of POST to choose the correct reading from among the various alternatives.
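The [pre no]/[allow] interaction described in point 1 amounts to a simple permission check, sketched here with invented entry structures (the real entries are written in the .cut feature notation, not Python dictionaries):

```python
# Sketch: a prefix may attach to a stem freely unless the stem has
# [pre no]; in that case the stem must explicitly [allow] the prefix.
# The dictionary entries below are illustrative, not the real format.

def may_prefix(prefix_entry, stem_entry):
    if "pre no" not in stem_entry["features"]:
        return True
    return prefix_entry["allow"] in stem_entry.get("allow", [])

re_prefix = {"form": "re", "allow": "re"}
de_prefix = {"form": "de", "allow": "de"}
test_verb = {"stem": "test", "features": ["pre no"], "allow": ["re"]}

assert may_prefix(re_prefix, test_verb)        # "retest" is permitted
assert not may_prefix(de_prefix, test_verb)    # "detest" is blocked
```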

4.11  Control Features and Output Features

The lexical files include several control features that specify how stems should be treated.  One important set includes the [comp x+x] features for compounds; this feature controls how compounds will be unpacked for formatting on the %mor line.  Irregular adjectives in adj-ir.cut have features specifying their degree as comparative or superlative. Irregular nouns have features controlling the use of the plural.  Irregular verbs have features controlling consonant doubling [gg +] and the formation of the perfect tense. Features like [block ed] are used to prevent recognition of overregularized forms such as “goed”.

There are also a variety of features that are included in lexical entries but not necessarily present in the final output.  For example, the feature of gender is used to determine patterns of suffixation in Spanish, but for this feature to appear in the output, it must be listed and not commented out in the output.cut file.  Other lexical features of this type include root, ptn, num, tense, and deriv.

5       Correcting errors

When running MOR on a new set of CHAT files, it is important to make sure that MOR will be able to recognize all the words in these files.  A first step in this process is to run the CHECK program to see whether all the words follow basic CHAT rules, such as not including numbers or capital letters in the middle of words. There are several common reasons for a word not being recognized:

1.     It is misspelled.  If you have doubts about the spellings of certain words, you can look in the 0allwords.cdc file that is included in the /lex folder for each language.  The words there are listed in alphabetical order.

2.     The word should be preceded by an ampersand (&) to block lookup through MOR. There are four forms using the ampersand.  Nonwords just take the & alone, as in &gaga.  Incomplete words should be transcribed as &+text, as in &+sn for the beginning of “snake”.  Filler words should be transcribed as &-uh. Finally, sounds like laughing can be transcribed as &=laughs, as described more extensively in the CHAT manual.

3.     The word should have been transcribed with a special form marker, as in bobo@o or bo^bo@o for onomatopoeia.  It is impossible to list all possible onomatopoeic forms in the MOR lexicon, so the @o marker solves this problem by telling MOR how to treat the form. This approach will be needed for other special forms, such as babbling, word play, and so on.

4.     The word was transcribed in “eye-dialect” to represent phonological reductions.  When this is done, there are two basic ways to allow MOR to achieve correct lookup. If the word can be transcribed with parentheses for the missing material, as in “(be)cause”, then MOR will be happy.  This method is particularly useful in Spanish and German.  Alternatively, if there is a sound substitution, then you can transcribe using the [: text] replacement method, as in “pittie [: kittie]”.

5.     You should treat the word as a proper noun by capitalizing the first letter.  This method works for many languages, but not in German where all nouns are capitalized and not in Asian l