
Multilingual Natural Language Processing Applications: Finding the Structure of Words

Learn how to identify words of distinct types in human languages, and how the internal structure of words can be modeled in connection with the grammatical properties and lexical concepts the words should represent.

Otakar Smrž and Hyun-Jo You

Human language is a complicated thing. We use it to express our thoughts, and through language, we receive information and infer its meaning. Linguistic expressions are not unorganized, though. They show structure of different kinds and complexity and consist of more elementary components whose co-occurrence in context refines the notions they refer to in isolation and implies further meaningful relations between them.

Trying to understand language en bloc is not a viable approach. Linguists have developed whole disciplines that look at language from different perspectives and at different levels of detail. The point of morphology, for instance, is to study the variable forms and functions of words, while syntax is concerned with the arrangement of words into phrases, clauses, and sentences. Word structure constraints due to pronunciation are described by phonology, whereas conventions for writing constitute the orthography of a language. The meaning of a linguistic expression is its semantics, and etymology and lexicology cover especially the evolution of words and explain the semantic, morphological, and other links among them.

Words are perhaps the most intuitive units of language, yet they are in general tricky to define. Knowing how to work with them allows, in particular, the development of syntactic and semantic abstractions and simplifies other advanced views on language. Morphology is an essential part of language processing, and in multilingual settings, it becomes even more important.

In this chapter, we explore how to identify words of distinct types in human languages, and how the internal structure of words can be modeled in connection with the grammatical properties and lexical concepts the words should represent. The discovery of word structure is morphological parsing.

How difficult can such tasks be? It depends. In many languages, words are delimited in the orthography by whitespace and punctuation. But in many other languages, the writing system leaves it up to the reader to tell words apart or determine their exact phonological forms. Some languages use words whose form need not change much with the varying context; others are highly sensitive about the choice of word forms according to particular syntactic and semantic constraints and restrictions.

1.1. Words and Their Components

Words are defined in most languages as the smallest linguistic units that can form a complete utterance by themselves. The minimal parts of words that deliver aspects of meaning to them are called morphemes. Depending on the means of communication, morphemes are spelled out via graphemes—symbols of writing such as letters or characters—or are realized through phonemes, the distinctive units of sound in spoken language. It is not always easy to decide and agree on the precise boundaries discriminating words from morphemes and from phrases [1, 2].

1.1.1. Tokens

Suppose, for a moment, that words in English are delimited only by whitespace and punctuation [3], and consider Example 1–1:

If we confront our assumption with insights from etymology and syntax, we notice two words here: newspaper and won’t. Being a compound word, newspaper has an interesting derivational structure. We might wish to describe it in more detail, once there is a lexicon or some other linguistic evidence on which to build the possible hypotheses about the origins of the word. In writing, newspaper and the concept it denotes are distinguished from the isolated news and paper. In speech, however, the distinction is far from clear, and identification of words becomes an issue of its own.

For reasons of generality, linguists prefer to analyze won’t as two syntactic words, or tokens, each of which has its independent role and can be reverted to its normalized form. The structure of won’t could be parsed as will followed by not. In English, this kind of tokenization and normalization may apply to just a limited set of cases, but in other languages, these phenomena have to be treated in a less trivial manner.
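The tokenization and normalization just described can be sketched in a few lines of Python; the contraction table and the regular expression are illustrative, not a complete treatment of English.

```python
import re

# Illustrative table mapping a few English contractions to their
# syntactic words; a real system needs a much fuller inventory
# (and handling of ambiguous cases).
CONTRACTIONS = {
    "won't": ["will", "not"],
    "don't": ["do", "not"],
    "didn't": ["did", "not"],
}

def tokenize(text):
    # Split into word-like chunks and single punctuation marks.
    raw = re.findall(r"[\w']+|[^\w\s]", text)
    tokens = []
    for tok in raw:
        # Normalize known contractions into their component tokens.
        tokens.extend(CONTRACTIONS.get(tok.lower(), [tok]))
    return tokens

print(tokenize("I won't read the newspaper."))
# ['I', 'will', 'not', 'read', 'the', 'newspaper', '.']
```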

In Arabic or Hebrew [4], certain tokens are concatenated in writing with the preceding or the following ones, possibly changing their forms as well. The underlying lexical or syntactic units are thereby blurred into one compact string of letters and no longer appear as distinct words. Tokens behaving in this way can be found in various languages and are often called clitics.

In the writing systems of Chinese, Japanese [5], and Thai, whitespace is not used to separate words. The units that are delimited graphically in some way are sentences or clauses. In Korean, character strings are called eojeol ‘word segment’ and roughly correspond to speech or cognitive units, which are usually larger than words and smaller than clauses [6], as shown in Example 1–2:

Nonetheless, the elementary morphological units are viewed as having their own syntactic status [7]. In such languages, tokenization, also known as word segmentation, is the fundamental step of morphological analysis and a prerequisite for most language processing applications.
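A classical baseline for such word segmentation is greedy longest-match against a lexicon. The sketch below uses Latin letters to stand in for characters of an unsegmented script; production systems rely on statistical or neural models instead.

```python
def segment(text, lexicon, max_len=4):
    """Greedy left-to-right longest-match segmentation: at each position,
    take the longest lexicon entry, falling back to a single character."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in lexicon or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

# Toy lexicon; each letter stands in for one character of the script.
lexicon = {"ab", "abc", "cd", "d"}
print(segment("abcd", lexicon))  # ['abc', 'd']
```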

1.1.2. Lexemes

By the term word, we often denote not just the one linguistic form in the given context but also the concept behind the form and the set of alternative forms that can express it. Such sets are called lexemes or lexical items, and they constitute the lexicon of a language. Lexemes can be divided by their behavior into the lexical categories of verbs, nouns, adjectives, conjunctions, particles, or other parts of speech. The citation form of a lexeme, by which it is commonly identified, is also called its lemma.

When we convert a word into its other forms, such as turning the singular mouse into the plural mice or mouses, we say we inflect the lexeme. When we transform a lexeme into another one that is morphologically related, regardless of its lexical category, we say we derive the lexeme: for instance, the nouns receiver and reception are derived from the verb to receive.
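A lexicon-driven lemmatizer makes the distinction between lexemes and their forms concrete. The toy table below is illustrative; a real lexicon lists thousands of lexemes and also records the grammatical features of each form.

```python
# Toy lexeme table: each lemma (citation form) maps to its inflected forms.
LEXEMES = {
    "mouse": ["mouse", "mice", "mouses"],
    "see": ["see", "sees", "saw", "seen", "seeing"],
}

# Invert the table so any attested form can be looked up by its lemma.
FORM_TO_LEMMA = {form: lemma
                 for lemma, forms in LEXEMES.items()
                 for form in forms}

def lemmatize(form):
    # Unknown forms are returned unchanged.
    return FORM_TO_LEMMA.get(form, form)

print(lemmatize("mice"))  # mouse
print(lemmatize("saw"))   # see
```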

Example 1–3 presents the problem of tokenization of didn’t and the investigation of the internal structure of anyone. In the paraphrase I saw no one, the lexeme to see would be inflected into the form saw to reflect its grammatical function of expressing positive past tense. Likewise, him is the oblique case form of he or even of a more abstract lexeme representing all personal pronouns. In the paraphrase, no one can be perceived as the minimal word synonymous with nobody. The difficulty with the definition of what counts as a word need not pose a problem for the syntactic description if we understand no one as two closely connected tokens treated as one fixed element.

In the Czech translation of Example 1–3, the lexeme vidět ‘to see’ is inflected for past tense, in which forms comprising two tokens are produced in the second and first person (i.e., viděla jsi ‘you-FEM-SG saw’ and neviděla jsem ‘I-FEM-SG did not see’). Negation in Czech is an inflectional parameter rather than just syntactic and is marked both in the verb and in the pronoun of the latter response, as in Example 1–4:

Here, vidělas is the contracted form of viděla jsi ‘you-FEM-SG saw’. The s of jsi ‘you are’ is a clitic, and due to free word order in Czech, it can be attached to virtually any part of speech. We could thus ask a question like Nikohos neviděla? ‘Did you see no one?’ in which the pronoun nikoho ‘no one’ is followed by this clitic.

1.1.3. Morphemes

Morphological theories differ on whether and how to associate the properties of word forms with their structural components [8, 9, 10, 11]. These components are usually called segments or morphs. The morphs that by themselves represent some aspect of the meaning of a word are called morphemes of some function.

Human languages employ a variety of devices by which morphs and morphemes are combined into word forms. The simplest morphological process concatenates morphs one by one, as in dis-agree-ment-s, where agree is a free lexical morpheme and the other elements are bound grammatical morphemes contributing some partial meaning to the whole word.
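For purely concatenative morphology of this kind, segmentation can be sketched as iterative affix stripping. The affix inventories below are illustrative, and no morphophonemic changes are handled.

```python
def segment_morphs(word, prefixes=("dis", "un", "re"),
                   suffixes=("ment", "ing", "s")):
    """Strip known affixes off a concatenatively built word form,
    leaving the free lexical morpheme in the middle."""
    pre, suf = [], []
    changed = True
    while changed:
        changed = False
        for p in prefixes:
            if word.startswith(p) and len(word) > len(p):
                pre.append(p); word = word[len(p):]; changed = True
        for s in suffixes:
            if word.endswith(s) and len(word) > len(s):
                suf.insert(0, s); word = word[:-len(s)]; changed = True
    return pre + [word] + suf

print(segment_morphs("disagreements"))  # ['dis', 'agree', 'ment', 's']
```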

In a more complex scheme, morphs can interact with each other, and their forms may become subject to additional phonological and orthographic changes denoted as morphophonemic. The alternative forms of a morpheme are termed allomorphs.

Examples of morphological alternation and phonologically dependent choice of the form of a morpheme are abundant in the Korean language. In Korean, many morphemes change their forms systematically with the phonological context. Example 1–5 lists the allomorphs -ess-, -ass-, -yess- of the temporal marker indicating past tense. The first two alter according to the phonological condition of the preceding verb stem; the last one is used especially for the verb ha- ‘do’. The appropriate allomorph is merely concatenated after the stem, or it can be further contracted with it, as was -si-ess- into -syess- in Example 1–2. During morphological parsing, normalization of allomorphs into some canonical form of the morpheme is desirable, especially because the contraction of morphs interferes with simple segmentation:

Contractions (a, b) are ordinary but require attention because two characters are reduced into one. Other types (c, d, e) are phonologically unpredictable, or lexically dependent. For example, coh-ass- ‘have been good’ may never be contracted, whereas noh- and -ass- are merged into nwass- in (e).
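The choice among these allomorphs during generation can be sketched as a rule on the stem's last vowel (the bright vowels a and o select -ass-), with ha- treated as a lexical exception. The romanization and the rule itself are simplified for illustration.

```python
def past_allomorph(stem):
    """Pick the romanized past-tense allomorph for a Korean verb stem.
    Simplified rule: -yess- for ha- 'do'; -ass- after stems whose last
    vowel is a or o; -ess- otherwise."""
    if stem == "ha":
        return "yess"
    last_vowel = next((c for c in reversed(stem) if c in "aeiou"), "")
    return "ass" if last_vowel in ("a", "o") else "ess"

print(past_allomorph("cap"))  # ass   (cap- 'catch' -> cap-ass-)
print(past_allomorph("mek"))  # ess   (mek- 'eat'  -> mek-ess-)
print(past_allomorph("ha"))   # yess  (ha- 'do'    -> ha-yess-)
```

During parsing, the inverse mapping applies: any of the three allomorphs is normalized to one canonical past-tense morpheme.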

There are yet other linguistic devices of word formation to account for, as the morphological process itself can get less trivial. The concatenation operation can be complemented with infixation or intertwining of the morphs, which is common, for instance, in Arabic. Nonconcatenative inflection by modification of the internal vowel of a word occurs even in English: compare the sounds of mouse and mice, see and saw, read and read.

Notably in Arabic, internal inflection takes place routinely and has a yet different quality. The internal parts of words, called stems, are modeled with root and pattern morphemes. Word structure is then described by templates abstracting away from the root but showing the pattern and all the other morphs attached to either side of it.

The meaning of Example 1–6 is similar to that of Example 1–1, only the phrase hādihi l-ğarāʾida refers to ‘these newspapers’. While sataqraʾu ‘you will read’ combines the future marker sa- with the imperfective second-person masculine singular verb taqraʾu in the indicative mood and active voice, sataqraʾuhā ‘you will read it’ also adds the cliticized feminine singular personal pronoun in the accusative case.

The citation form of the lexeme to which taqraʾu ‘you-MASC-SG read’ belongs is qaraʾa, roughly ‘to read’. This form is classified by linguists as the basic verbal form represented by the template faʿala merged with the consonantal root q r ʾ, where the f ʿ l symbols of the template are substituted by the respective root consonants. Inflections of this lexeme can modify the pattern faʿal of the stem of the lemma into fʿal and concatenate it, under rules of morphophonemic changes, with further prefixes and suffixes. The structure of taqraʾu is thus parsed into the template tafʿalu and the invariant root.

The word al-ğarāʾida ‘the newspapers’ in the accusative case and definite state is another example of internal inflection. Its structure follows the template faʿāʾil with the root ğ r d. This word is the plural of ğarīdah ‘newspaper’ with the template faʿīl-ah. The links between singular and plural templates are subject to convention and have to be declared in the lexicon.
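Root-and-pattern interdigitation of this kind can be sketched as slot substitution; the numbered slots stand for the template's placeholder consonants, and the romanization (with ' for the glottal stop hamza) is simplified for illustration.

```python
def interdigitate(template, root):
    """Substitute the root consonants into numbered template slots;
    slots 1, 2, 3 stand for the placeholder consonants of the template."""
    out = template
    for slot, consonant in zip("123", root):
        out = out.replace(slot, consonant)
    return out

# Triliteral root of 'to read', with the hamza written as an apostrophe.
root = ["q", "r", "'"]
print(interdigitate("1a2a3a", root))   # qara'a  (citation form)
print(interdigitate("ta12a3u", root))  # taqra'u ('you read')
```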

Irrespective of the morphological processes involved, some properties or features of a word need not be apparent explicitly in its morphological structure. Its existing structural components may be paired with and depend on several functions simultaneously but may have no particular grammatical interpretation or lexical meaning.

The -ah suffix of ğarīdah ‘newspaper’ corresponds with the inherent feminine gender of the lexeme. In fact, the -ah morpheme is commonly, though not exclusively, used to mark the feminine singular forms of adjectives: for example, ğadīd becomes ğadīdah ‘new’. However, the -ah suffix can be part of words that are not feminine, and there its function can be seen as either emptied or overridden [12]. In general, linguistic forms should be distinguished from functions, and not every morph can be assumed to be a morpheme.

1.1.4. Typology

Morphological typology divides languages into groups by characterizing the prevalent morphological phenomena in those languages. It can consider various criteria, and during the history of linguistics, different classifications have been proposed [13, 14]. Let us outline the typology that is based on quantitative relations between words, their morphemes, and their features:

  • Isolating, or analytic, languages include no or relatively few words that would comprise more than one morpheme (typical members are Chinese, Vietnamese, and Thai; analytic tendencies are also found in English).

  • Synthetic languages can combine more morphemes in one word and are further divided into agglutinative and fusional languages.

      • Agglutinative languages have morphemes associated with only a single function at a time (as in Korean, Japanese, Finnish, Tamil, etc.).

      • Fusional languages are defined by a feature-per-morpheme ratio higher than one (as in Arabic, Czech, Latin, Sanskrit, German, etc.).
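The morpheme-per-word ratios underlying this classification can be made concrete with a small computation (Greenberg's index of synthesis); the segmented samples below are toy data for illustration.

```python
def synthesis_index(segmented_words):
    """Average number of morphemes per word over a morph-segmented
    sample; analytic languages score near 1, synthetic ones higher."""
    return sum(len(word) for word in segmented_words) / len(segmented_words)

# Each word is given as its list of morphs.
analytic  = [["I"], ["will"], ["not"], ["read"], ["it"]]
synthetic = [["dis", "agree", "ment", "s"], ["re", "new", "ed"]]
print(synthesis_index(analytic))   # 1.0
print(synthesis_index(synthetic))  # 3.5
```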

In accordance with the notions about word formation processes mentioned earlier, we can also discern:

  • Concatenative languages linking morphs and morphemes one after another.

  • Nonlinear languages allowing structural components to merge nonsequentially to apply tonal morphemes or change the consonantal or vocalic templates of words.

While some morphological phenomena, such as orthographic collapsing, phonological contraction, or complex inflection and derivation, are more dominant in some languages than in others, in principle, we can find, and should be able to deal with, instances of these phenomena across different language families and typological classes.
