Word sense disambiguation
In computational linguistics, word-sense disambiguation (WSD) is an open problem of natural language processing: the task of identifying which sense of a word (i.e. which meaning) is used in a sentence, when the word has multiple meanings (polysemy). The solution to this problem impacts other language-processing tasks, such as discourse analysis, improving the relevance of search engines, anaphora resolution, coherence, and inference.

Research has progressed steadily to the point where WSD systems achieve sufficiently high levels of accuracy on a variety of word types and ambiguities. A rich variety of techniques has been researched, from dictionary-based methods that use the knowledge encoded in lexical resources, to supervised machine learning methods in which a classifier is trained for each distinct word on a corpus of manually sense-annotated examples, to completely unsupervised methods that cluster occurrences of words, thereby inducing word senses. Among these, supervised learning approaches have been the most successful algorithms to date.

Current accuracy is difficult to state without a host of caveats. In English, accuracy at the coarse-grained (homograph) level is routinely above 90%, with some methods on particular homographs achieving over 96%. On finer-grained sense distinctions, top accuracies from 59.1% to 69.0% have been reported in recent evaluation exercises (SemEval-2007, Senseval-2), where the baseline accuracy of the simplest possible algorithm of always choosing the most frequent sense was 51.4% and 57%, respectively.
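The most-frequent-sense baseline mentioned above is simple enough to sketch directly. The miniature sense-tagged corpus below is hypothetical, invented for illustration, and stands in for the large annotated corpora used in real evaluations:

```python
from collections import Counter

def train_mfs_baseline(tagged_corpus):
    """Learn the most frequent sense of each word from (word, sense) pairs."""
    counts = {}
    for word, sense in tagged_corpus:
        counts.setdefault(word, Counter())[sense] += 1
    # most_common(1) gives the single highest-count (sense, count) pair
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

# Hypothetical miniature sense-tagged corpus
corpus = [("bass", "fish"), ("bass", "music"), ("bass", "music"),
          ("pen", "writing"), ("pen", "enclosure"), ("pen", "writing")]

mfs = train_mfs_baseline(corpus)
print(mfs["bass"])  # the baseline always answers "music" for "bass"
print(mfs["pen"])   # and "writing" for "pen"
```

The baseline ignores context entirely, which is why it serves as a floor: any useful WSD system must beat it.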

About

A disambiguation process requires two inputs: a dictionary to specify the senses which are to be disambiguated, and a corpus of language data to be disambiguated (in some methods, a training corpus of language examples is also required). The WSD task has two variants: the "lexical sample" task and the "all words" task. The former comprises disambiguating the occurrences of a small sample of previously selected target words, while in the latter all the words in a piece of running text need to be disambiguated. The latter is deemed a more realistic form of evaluation, but the corpus is more expensive to produce, because human annotators have to read the definitions for each word in the sequence every time they need to make a tagging judgement, rather than once for a block of instances of the same target word.

To give a hint of how all this works, consider two examples of the distinct senses that exist for the (written) word "bass":
  1. a type of fish
  2. tones of low frequency

and the sentences:
  1. I went fishing for some sea bass.
  2. The bass line of the song is too weak.


To a human, it is obvious that the first sentence is using the word "bass (fish)", as in the former sense above, and that the second sentence is using the word "bass (instrument)", as in the latter sense. Developing algorithms to replicate this human ability can often be a difficult task, as is further exemplified by the implicit equivocation between "bass (sound)" and "bass (instrument)".

History

WSD was first formulated as a distinct computational task during the early days of machine translation in the 1940s, making it one of the oldest problems in computational linguistics. Warren Weaver, in his famous 1949 memorandum on translation, first introduced the problem in a computational context. Early researchers understood well the significance and difficulty of WSD. In fact, Bar-Hillel (1960) used the above example to argue that WSD could not be solved by "electronic computer" because of the need in general to model all world knowledge.

In the 1970s, WSD was a subtask of semantic interpretation systems developed within the field of artificial intelligence, but since WSD systems were largely rule-based and hand-coded they were prone to a knowledge acquisition bottleneck.

By the 1980s large-scale lexical resources, such as the Oxford Advanced Learner's Dictionary of Current English (OALD), became available: hand-coding was replaced with knowledge automatically extracted from these resources, but disambiguation was still knowledge-based or dictionary-based.

In the 1990s, the statistical revolution swept through computational linguistics, and WSD became a paradigm problem on which to apply supervised machine learning techniques.

The 2000s saw supervised techniques reach a plateau in accuracy, and so attention has shifted to coarser-grained senses, domain adaptation, semi-supervised and unsupervised corpus-based systems, combinations of different methods, and the return of knowledge-based systems via graph-based methods. Still, supervised systems continue to perform best.

Differences between dictionaries

One problem with word sense disambiguation is deciding what the senses are. In cases like the word bass above, at least some senses are obviously different. In other cases, however, the different senses can be closely related (one meaning being a metaphorical or metonymic extension of another), and in such cases the division of words into senses becomes much more difficult. Different dictionaries and thesauruses will provide different divisions of words into senses. One solution some researchers have used is to choose a particular dictionary and just use its set of senses. Generally, however, research results using broad distinctions in senses have been much better than those using narrow ones. However, given the lack of a full-fledged coarse-grained sense inventory, most researchers continue to work on fine-grained WSD.

Most research in the field of WSD is performed using WordNet as a reference sense inventory for English. WordNet is a computational lexicon that encodes concepts as synonym sets (e.g. the concept of car is encoded as { car, auto, automobile, machine, motorcar }). Other resources used for disambiguation purposes include Roget's Thesaurus and Wikipedia.
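A WordNet-style sense inventory can be sketched as a small mapping from sense IDs to synonym sets and glosses. The entries below are illustrative stand-ins, not actual WordNet data or its API:

```python
# A miniature WordNet-style inventory: each sense is a synonym set (synset)
# plus a gloss. All entries here are invented for illustration.
inventory = {
    "car.n.01": {"synonyms": {"car", "auto", "automobile", "machine", "motorcar"},
                 "gloss": "a motor vehicle with four wheels"},
    "bass.n.01": {"synonyms": {"bass"},
                  "gloss": "the lowest part of the musical range"},
    "bass.n.02": {"synonyms": {"bass", "sea bass"},
                  "gloss": "any of various edible marine fishes"},
}

def senses_of(word):
    """Return the sense IDs whose synset contains the given word."""
    return [sid for sid, s in inventory.items() if word in s["synonyms"]]

print(senses_of("bass"))  # ['bass.n.01', 'bass.n.02']
print(senses_of("auto"))  # ['car.n.01']
```

Looking up a word returns all of its candidate senses; disambiguation is then the problem of choosing among them for a given context.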

Part-of-speech tagging

In any real test, part-of-speech tagging and sense tagging are very closely related, with each potentially constraining the other. The question of whether these tasks should be kept together or decoupled is still not unanimously resolved, but recently the tendency has been to evaluate them separately (e.g., in the Senseval/SemEval competitions, parts of speech are provided as input for the text to disambiguate).

It is instructive to compare the word sense disambiguation problem with the problem of part-of-speech tagging. Both involve disambiguating words, be it with senses or parts of speech. However, algorithms used for one do not tend to work well for the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word may be determined by words further away. The success rate for part-of-speech tagging algorithms is at present much higher than that for WSD, the state of the art being around 95% accuracy or better, as compared to less than 75% accuracy in word sense disambiguation with supervised learning. These figures are typical for English, and may be very different from those for other languages.

Inter-judge variance

Another problem is inter-judge variance. WSD systems are normally tested by comparing their results on a task against those of a human. However, while it is relatively easy to assign parts of speech to text, training people to tag senses is far more difficult. While users can memorize all of the possible parts of speech a word can take, it is often impossible for individuals to memorize all of the senses a word can take. Moreover, humans do not agree on the task at hand: given a list of senses and sentences, humans will not always agree on which sense a word is being used in.

Thus, a computer cannot be expected to give better performance on such a task than a human (indeed, since the human serves as the standard, the computer being better than the human is incoherent), so human performance serves as an upper bound. Human performance, however, is much better on coarse-grained than fine-grained distinctions, which is why research on coarse-grained distinctions has been put to the test in recent WSD evaluation exercises.

Common sense

Some AI researchers like Douglas Lenat argue that one cannot parse meanings from words without some form of common sense ontology.
For example, comparing these two sentences:
  • "Jill and Mary are sisters." – (they are sisters of each other).
  • "Jill and Mary are mothers." – (each is independently a mother).

To properly identify senses of words one must know common sense facts. Moreover, common sense is sometimes needed to disambiguate words such as pronouns, in the case of anaphoras or cataphoras in the text.

Sense inventory and algorithms' task-dependency

A task-independent sense inventory is not a coherent concept: each task requires its own division of word meaning into senses relevant to the task. For example, the ambiguity of 'mouse' (animal or device) is not relevant in English-French machine translation, but is relevant in information retrieval. The opposite is true of 'river', which requires a choice in French (fleuve 'flows into the sea', or rivière 'flows into a river').

Also, completely different algorithms might be required by different applications. In machine translation, the problem takes the form of target word selection. Here the "senses" are words in the target language, which often correspond to significant meaning distinctions in the source language (bank could translate to French banque 'financial bank' or rive 'edge of river'). In information retrieval, a sense inventory is not necessarily required, because it is enough to know that a word is used in the same sense in the query and a retrieved document; what sense that is, is unimportant.

Discreteness of senses

Finally, the very notion of "word sense" is slippery and controversial. Most people can agree on distinctions at the coarse-grained homograph level (e.g., pen as writing instrument or enclosure), but go down one level to fine-grained polysemy, and disagreements arise. For example, in Senseval-2, which used fine-grained sense distinctions, human annotators agreed on only 85% of word occurrences. Word meaning is in principle infinitely variable and context-sensitive. It does not divide up easily into distinct or discrete sub-meanings. Lexicographers frequently discover in corpora loose and overlapping word meanings, and standard or conventional meanings extended, modulated, and exploited in a bewildering variety of ways. The art of lexicography is to generalize from the corpus to definitions that evoke and explain the full range of meaning of a word, making it seem like words are well-behaved semantically. However, it is not at all clear if these same meaning distinctions are applicable in computational applications, as the decisions of lexicographers are usually driven by other considerations. Recently, a task named lexical substitution has been proposed as a possible solution to the sense discreteness problem. The task consists of providing a substitute for a word in context that preserves the meaning of the original word (potentially, substitutes can be chosen from the full lexicon of the target language, thus overcoming discreteness).

Approaches and methods

As in all natural language processing, there are two main approaches to WSD: deep approaches and shallow approaches.

Deep approaches presume access to a comprehensive body of world knowledge. Knowledge such as "you can go fishing for a type of fish, but not for low frequency sounds" and "songs have low frequency sounds as parts, but not types of fish" is then used to determine in which sense the word is used. These approaches are not very successful in practice, mainly because such a body of knowledge does not exist in a computer-readable format outside of very limited domains. However, if such knowledge did exist, then deep approaches would be much more accurate than shallow approaches. Also, there is a long tradition in computational linguistics of trying such approaches in terms of coded knowledge, and in some cases it is hard to say clearly whether the knowledge involved is linguistic or world knowledge. The first attempt was that by Margaret Masterman and her colleagues, at the Cambridge Language Research Unit in England, in the 1950s. This attempt used as data a punched-card version of Roget's Thesaurus and its numbered "heads" as an indicator of topics, and looked for repetitions in text using a set intersection algorithm. It was not very successful, but had strong relationships to later work, especially Yarowsky's machine learning optimisation of a thesaurus method in the 1990s.

Shallow approaches don't try to understand the text. They just consider the surrounding words, using information such as "if bass has the words sea or fishing nearby, it probably is in the fish sense; if bass has the words music or song nearby, it is probably in the music sense." These rules can be automatically derived by the computer, using a training corpus of words tagged with their word senses. This approach, while theoretically not as powerful as deep approaches, gives superior results in practice, due to the computer's limited world knowledge. However, it can be confused by sentences like "The dogs bark at the tree", which contains the word bark near both tree and dogs.
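A minimal sketch of such a shallow cue-word rule follows; the cue lists here are hand-written for illustration rather than learned from a tagged corpus:

```python
def shallow_disambiguate(sentence, cues):
    """Pick the sense whose cue words co-occur most often in the sentence."""
    tokens = set(sentence.lower().split())
    scores = {sense: len(tokens & words) for sense, words in cues.items()}
    return max(scores, key=scores.get)

# Hand-written cue lists for "bass" (illustrative only)
bass_cues = {
    "fish":  {"sea", "fishing", "river", "caught"},
    "music": {"music", "song", "line", "play"},
}

print(shallow_disambiguate("i went fishing for some sea bass", bass_cues))       # fish
print(shallow_disambiguate("the bass line of the song is too weak", bass_cues))  # music
```

A learned system would derive the cue sets (and weights for them) from sense-tagged training data rather than relying on hand-picked words.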

There are four conventional approaches to WSD:
  • Dictionary- and knowledge-based methods: These rely primarily on dictionaries, thesauri, and lexical knowledge bases, without using any corpus evidence.
  • Supervised methods: These make use of sense-annotated corpora to train from.
  • Semi-supervised or minimally-supervised methods: These make use of a secondary source of knowledge such as a small annotated corpus as seed data in a bootstrapping process, or a word-aligned bilingual corpus.
  • Unsupervised methods: These eschew (almost) completely external information and work directly from raw unannotated corpora. These methods are also known under the name of word sense discrimination.


Almost all these approaches normally work by defining a window of n content words around each word to be disambiguated in the corpus, and statistically analyzing those n surrounding words. Two shallow approaches used to train and then disambiguate are Naïve Bayes classifiers and decision trees. In recent research, kernel-based methods such as support vector machines have shown superior performance in supervised learning. Graph-based approaches have also gained much attention from the research community, and currently achieve performance close to the state of the art.
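A Naïve Bayes sense classifier over a bag of context words can be sketched as follows. The training examples and the add-one smoothing scheme are illustrative choices, not a reference implementation:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesWSD:
    """Naive Bayes word-sense classifier over a bag of context words."""

    def __init__(self):
        self.sense_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()

    def train(self, examples):
        """examples: iterable of (context word list, sense label) pairs."""
        for context, sense in examples:
            self.sense_counts[sense] += 1
            for w in context:
                self.word_counts[sense][w] += 1
                self.vocab.add(w)

    def classify(self, context):
        """Return the sense maximizing P(sense) * prod P(word | sense)."""
        total = sum(self.sense_counts.values())
        best, best_lp = None, float("-inf")
        for sense, n in self.sense_counts.items():
            lp = math.log(n / total)  # log prior
            denom = sum(self.word_counts[sense].values()) + len(self.vocab)
            for w in context:
                # add-one smoothing so unseen words do not zero out a sense
                lp += math.log((self.word_counts[sense][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = sense, lp
        return best

clf = NaiveBayesWSD()
clf.train([(["sea", "fishing", "boat"], "fish"),
           (["caught", "sea", "fishing"], "fish"),
           (["song", "music", "line"], "music"),
           (["play", "music", "band"], "music")])
print(clf.classify(["fishing", "boat"]))  # fish
print(clf.classify(["song", "band"]))     # music
```

The "naive" independence assumption (each context word contributes independently given the sense) is what makes the model tractable with small training sets.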

Dictionary- and knowledge-based methods

The Lesk algorithm is the seminal dictionary-based method. It is based on the hypothesis that words used together in text are related to each other and that the relation can be observed in the definitions of the words and their senses. Two (or more) words are disambiguated by finding the pair of dictionary senses with the greatest word overlap in their dictionary definitions. For example, when disambiguating the words in "pine cone", the definitions of the appropriate senses both include the words evergreen and tree (at least in one dictionary).
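A simplified variant of Lesk (matching context words against each candidate gloss, rather than computing gloss-to-gloss overlap for word pairs) can be sketched as follows; the glosses below are invented for illustration, not taken from any real dictionary:

```python
def simplified_lesk(sentence, glosses):
    """Choose the sense whose gloss shares the most words with the context."""
    context = set(sentence.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in glosses.items():
        overlap = len(context & set(gloss.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

# Invented glosses for two senses of "cone" (illustrative only)
glosses = {
    "cone.botany": "fruit of an evergreen tree with overlapping scales",
    "cone.geometry": "a solid figure tapering from a circular base to a point",
}

print(simplified_lesk("the pine cone fell from the evergreen tree", glosses))
# cone.botany
```

The botany gloss wins here because it shares "evergreen" and "tree" with the context, mirroring the "pine cone" example above.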

An alternative to the use of definitions is to consider general word-sense relatedness and to compute the semantic similarity of each pair of word senses based on a given lexical knowledge base such as WordNet. Graph-based methods reminiscent of spreading-activation research from the early days of AI have been applied with some success. More complex graph-based approaches have been shown to perform almost as well as supervised methods, or even to outperform them in specific domains. Recently, it has been reported that simple graph connectivity measures, such as degree, perform state-of-the-art WSD in the presence of a sufficiently rich lexical knowledge base. Also, automatically transferring knowledge in the form of semantic relations from Wikipedia to WordNet has been shown to boost simple knowledge-based methods, enabling them to rival the best supervised systems and even outperform them in a domain-specific setting.
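The degree heuristic mentioned above can be sketched on a toy sense graph: build a graph over the candidate senses of the words in context, link senses that the knowledge base relates, and pick for each word the candidate with the highest degree. The edge list below is invented for illustration, not drawn from a real knowledge base:

```python
from collections import defaultdict

# Hypothetical sense graph: each edge is a semantic relation between
# word senses, as it might be extracted from a lexical knowledge base.
EDGES = [
    ("bass#fish", "cook#prepare"),
    ("bass#fish", "eat#consume"),
    ("bass#music", "play#perform"),
]

def degree_wsd(candidate_senses, edges):
    """For each word, pick the candidate sense with the highest degree."""
    deg = defaultdict(int)
    for a, b in edges:       # count edges incident to each sense vertex
        deg[a] += 1
        deg[b] += 1
    return {w: max(senses, key=lambda s: deg[s])
            for w, senses in candidate_senses.items()}

print(degree_wsd({"bass": ["bass#fish", "bass#music"]}, EDGES))
# → {'bass': 'bass#fish'}  (degree 2 beats degree 1)
```

In a context about cooking and eating, the fish sense is better connected, so degree selects it.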

The use of selectional preferences (or selectional restrictions) is also useful. For example, knowing that one typically cooks food, one can disambiguate the word bass in "I am cooking basses" (i.e., as a fish, not a musical instrument).
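A selectional-preference check can be reduced to a table lookup, as in this toy sketch; both tables are hypothetical and stand in for preferences that would normally be learned from corpora or taken from a lexicon:

```python
# Toy selectional-preference check: a verb's preferred object class rules
# out incompatible senses of its object. Both tables are hypothetical.
OBJECT_CLASS = {"cook": "food"}              # verb -> preferred object class
SENSE_CLASS = {"bass#fish": "food",          # sense -> semantic class
               "bass#music": "instrument"}

def disambiguate_object(verb, senses):
    """Keep only the object senses compatible with the verb's preference."""
    wanted = OBJECT_CLASS.get(verb)
    return [s for s in senses if SENSE_CLASS[s] == wanted]

print(disambiguate_object("cook", ["bass#fish", "bass#music"]))  # → ['bass#fish']
```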

Supervised methods

Supervised methods are based on the assumption that the context can provide enough evidence on its own to disambiguate words (hence, world knowledge and reasoning are deemed unnecessary). Probably every machine-learning algorithm has been applied to WSD, including associated techniques such as feature selection, parameter optimization, and ensemble learning. Support vector machines and memory-based learning have been shown to be the most successful approaches to date, probably because they can cope with the high dimensionality of the feature space. However, these supervised methods are subject to a new knowledge-acquisition bottleneck, since they rely on substantial amounts of manually sense-tagged corpora for training, which are laborious and expensive to create.
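The supervised setting can be sketched with a minimal bag-of-words Naive Bayes classifier trained on sense-tagged contexts. This is a toy model, not any particular published system, and the training sentences are invented:

```python
import math
from collections import Counter, defaultdict

# Hypothetical sense-tagged training contexts for the ambiguous word "bass".
TRAIN = [
    ("he plays bass in a jazz band", "music"),
    ("the guitar and bass were loud", "music"),
    ("we caught a bass in the lake", "fish"),
    ("grilled bass is served with lemon", "fish"),
]

def train_nb(examples):
    """Collect per-sense word counts from sense-tagged contexts."""
    counts = defaultdict(Counter)
    for text, sense in examples:
        counts[sense].update(text.split())
    return counts

def predict(counts, text):
    """Pick the sense with the highest add-one-smoothed log-probability."""
    vocab = {w for c in counts.values() for w in c}
    def log_prob(sense):
        c, total = counts[sense], sum(counts[sense].values())
        return sum(math.log((c[w] + 1) / (total + len(vocab)))
                   for w in text.split())
    return max(counts, key=log_prob)

model = train_nb(TRAIN)
print(predict(model, "we grilled the bass"))  # → fish
```

In a real system the features would also include collocations, part-of-speech tags, and positional information, not just a bag of words.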

Semi-supervised methods

Because of the lack of training data, many word-sense disambiguation algorithms use semi-supervised learning, which allows both labeled and unlabeled data. The Yarowsky algorithm was an early example of such an algorithm. It uses the ‘one sense per collocation’ and ‘one sense per discourse’ properties of human languages: from observation, words tend to exhibit only one sense in a given discourse and in a given collocation.

The bootstrapping approach starts from a small amount of seed data for each word: either manually tagged training examples or a small number of surefire decision rules (e.g., 'play' in the context of 'bass' almost always indicates the musical instrument). The seeds are used to train an initial classifier, using any supervised method. This classifier is then used on the untagged portion of the corpus to extract a larger training set, in which only the most confident classifications are included. The process repeats, each new classifier being trained on a successively larger training corpus, until the whole corpus is consumed or a given maximum number of iterations is reached.
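The loop above can be sketched as follows. The `train`/`classify` pair here is a trivial word-overlap scorer standing in for any supervised method, and the seed and pool sentences are invented; the essential part is the confidence threshold, which keeps labeling errors from snowballing across iterations:

```python
# Bootstrapping sketch: grow a labeled set from seeds, adding only
# confident predictions each round.

def train(labeled):
    """'Train' by collecting, per sense, the words seen in its contexts."""
    model = {}
    for text, sense in labeled:
        model.setdefault(sense, set()).update(text.split())
    return model

def classify(model, text):
    """Return (best_sense, confidence) by word overlap with each sense."""
    words = set(text.split())
    scores = {s: len(words & vocab) for s, vocab in model.items()}
    best = max(scores, key=scores.get)
    return best, scores[best] / (sum(scores.values()) or 1)

def bootstrap(seeds, untagged, threshold=0.6, max_iters=10):
    labeled, pool = list(seeds), list(untagged)
    for _ in range(max_iters):
        model = train(labeled)
        added = [(t, classify(model, t)[0]) for t in pool
                 if classify(model, t)[1] >= threshold]
        if not added:
            break                       # no confident labels left: stop
        labeled += added
        taken = {t for t, _ in added}
        pool = [t for t in pool if t not in taken]
    return train(labeled)

seeds = [("play the bass on stage", "music"), ("catch a bass in the river", "fish")]
pool = ["play bass guitar", "bass in the river at dawn"]
model = bootstrap(seeds, pool)          # vocabularies grow beyond the seeds
```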

Other semi-supervised techniques use large quantities of untagged corpora to provide co-occurrence information that supplements the tagged corpora. These techniques have the potential to help in the adaptation of supervised models to different domains.

Also, an ambiguous word in one language is often translated into different words in a second language depending on the sense of the word. Word-aligned bilingual corpora have been used to infer cross-lingual sense distinctions, a kind of semi-supervised system.

Unsupervised methods

Unsupervised learning is the greatest challenge for WSD researchers. The underlying assumption is that similar senses occur in similar contexts, and thus senses can be induced from text by clustering word occurrences using some measure of similarity of context, a task referred to as word sense induction or discrimination. New occurrences of the word can then be classified into the closest induced clusters/senses. Performance has been lower than that of the other methods described above, but comparisons are difficult, since the induced senses must be mapped to a known dictionary of word senses. If a mapping to a set of dictionary senses is not desired, cluster-based evaluations (including measures of entropy and purity) can be performed. Alternatively, word sense induction methods can be tested and compared within an application. For instance, it has been shown that word sense induction improves Web search result clustering by increasing the quality of result clusters and the diversification of result lists. It is hoped that unsupervised learning will overcome the knowledge acquisition bottleneck, because such methods do not depend on manual effort.
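A minimal induction sketch: represent each occurrence of the target word by the set of its context words and greedily merge occurrences whose contexts are similar. The greedy single-pass grouping and the Jaccard threshold are simplifications standing in for proper clustering, and the example sentences are invented:

```python
# Word-sense induction sketch: cluster occurrences of an ambiguous word
# by the similarity of their contexts. Greedy single-pass clustering is
# a stand-in for a real clustering algorithm.

def jaccard(a, b):
    """Set similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def induce(contexts, threshold=0.3):
    clusters = []                       # each cluster: list of context sets
    for ctx in contexts:
        words = set(ctx.split())
        for cluster in clusters:
            centroid = set().union(*cluster)
            if jaccard(words, centroid) >= threshold:
                cluster.append(words)   # similar enough: join this cluster
                break
        else:
            clusters.append([words])    # no match: start a new sense cluster
    return clusters

occurrences = [
    "caught a bass in the lake",
    "the lake bass was huge",
    "played bass with the band",
    "the band needs a bass player",
]
print(len(induce(occurrences)))  # → 2 induced senses
```

The two induced clusters correspond to the fish-like and music-like contexts; mapping them onto dictionary senses is the separate, harder step discussed above.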

Other approaches

Other approaches vary in their methods:
  • Identification of dominant word senses;
  • Domain-driven disambiguation;
  • WSD using Cross-Lingual Evidence.

Local impediments and summary

The knowledge acquisition bottleneck is perhaps the major impediment to solving the WSD problem. Unsupervised methods rely on knowledge about word senses, which is only sparsely formulated in dictionaries and lexical databases. Supervised methods depend crucially on the existence of manually annotated examples for every word sense, a requisite that can so far be met only for a handful of words for testing purposes, as is done in the Senseval exercises.

Therefore, one of the most promising trends in WSD research is using the largest corpus ever accessible, the World Wide Web, to acquire lexical information automatically. WSD has traditionally been understood as an intermediate language-engineering technology which could improve applications such as information retrieval (IR). In this case, however, the reverse is also true: Web search engines implement simple and robust IR techniques that can be successfully used when mining the Web for information to be employed in WSD. The lack of training data has therefore prompted the emergence of the new algorithms and techniques described here:

External knowledge sources

Knowledge is a fundamental component of WSD. Knowledge sources provide data which are essential to associate senses with words. They can vary from corpora of texts, either unlabeled or annotated with word senses, to machine-readable dictionaries, thesauri, glossaries, ontologies, etc. They can be classified as follows:
  1. Structured:
    • Thesauri
    • Machine-readable dictionaries (MRDs)
    • Ontologies
  2. Unstructured:
    • Corpora: raw corpora and sense-annotated corpora
    • Collocation resources
    • Other resources (such as word frequency lists, stoplists, domain labels, etc.)

Evaluation

Comparing and evaluating different WSD systems is extremely difficult, because of the different test sets, sense inventories, and knowledge resources adopted. Before the organization of specific evaluation campaigns, most systems were assessed on in-house, often small-scale, data sets. To test an algorithm, its developers must spend time annotating all word occurrences; and comparing methods, even on the same corpus, is not possible if they use different sense inventories.

In order to define common evaluation datasets and procedures, public evaluation campaigns have been organized. Senseval (now renamed SemEval) is an international word-sense disambiguation competition, held every three years since 1998: Senseval-1 (1998), Senseval-2 (2001), Senseval-3 (2004), and its successor SemEval (2007). The objectives of the competition are to organize the different tasks, to prepare and hand-annotate corpora for testing systems, and to perform a comparative evaluation of WSD systems on several kinds of tasks, including all-words and lexical-sample WSD for different languages and, more recently, new tasks such as semantic role labeling, gloss WSD, and lexical substitution. The systems submitted for evaluation in these competitions usually integrate different techniques and often combine supervised and knowledge-based methods (especially to avoid poor performance when training examples are scarce).

Task design choices

Sense inventories. During the first Senseval workshop the HECTOR sense inventory was adopted. The reason for adopting a previously unknown sense inventory was mainly to avoid the use of popular fine-grained word senses (such as WordNet), which could make the experiments unfair or biased. However, given the lack of coverage of such inventories, since the second Senseval workshop the WordNet sense inventory has been adopted.

A set of testing words. Evaluations can be divided into two groups according to the number of words to be tested; the difference lies in the amount of analysis and processing required:
  • the all-words task implies disambiguating all the words of a text;
  • the lexical-sample task consists in disambiguating a set of previously chosen target words.

The former is assumed to be the more realistic evaluation, although testing its results is very laborious. Initially only the latter was used in evaluation, but later the former was included.

Lexical-sample organizers had to choose the samples on which the systems were to be tested. A criticism of earlier forays into lexical-sample WSD evaluation is that the sample had been chosen according to the whim of the experimenter (or to coincide with earlier experimenters' selections). For English Senseval, a sampling frame was devised in which words were classified according to their frequency (in the BNC) and their polysemy level (in WordNet). Whether to include the POS-tagging problem was also a matter of discussion, and it was decided that samples should be words with a known part of speech plus some indeterminates (e.g., 15 noun tasks, 13 verb tasks, 8 adjective tasks, and 5 indeterminate tasks).

Baselines. For comparison purposes, known but simple algorithms, called baselines, are used. These include variants of the Lesk algorithm and the most-frequent-sense algorithm.

Sense inventory. WSD exercises require a dictionary, to specify the word senses which are to be disambiguated, and a corpus of language data to be disambiguated. WordNet is the most popular example of a sense inventory. One reason for adopting the HECTOR database during Senseval-1 was that the WordNet inventory was already publicly available, and hence familiar to participants, which could have biased the evaluation.

Evaluation measures. During the evaluation of WSD systems two main performance measures are used:
  • Precision: the fraction of sense assignments made by the system that are correct
  • Recall: the fraction of all word instances that are correctly assigned by the system

If a system makes an assignment for every word, then precision and recall are the same, and can be called accuracy. This model has been extended to take into account systems that return a set of senses with weights for each occurrence.
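The measures reduce to simple counts over the gold standard and the system output, as this sketch shows; the instances and sense labels are hypothetical, and abstention (no assignment) is what separates precision from recall:

```python
def precision_recall(gold, predicted):
    """gold: instance -> correct sense; predicted: instance -> sense or None
    (None means the system abstained on that instance)."""
    attempted = {i: s for i, s in predicted.items() if s is not None}
    correct = sum(1 for i, s in attempted.items() if gold[i] == s)
    precision = correct / len(attempted)   # correct / assignments made
    recall = correct / len(gold)           # correct / all instances
    return precision, recall

gold = {1: "fish", 2: "music", 3: "fish", 4: "music"}
pred = {1: "fish", 2: "music", 3: "music", 4: None}  # abstains on instance 4
p, r = precision_recall(gold, pred)
print(round(p, 3), r)  # → 0.667 0.5
```

With 2 correct out of 3 attempted and 4 total instances, precision is 2/3 and recall is 1/2; had the system labeled instance 4 as well, the two values would coincide, giving the accuracy mentioned above.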

Software

  • WordNet::SenseRelate: a project that includes free open-source systems for all-words sense disambiguation and lexical-sample sense disambiguation.

See also

  • Ambiguity
  • Lesk algorithm
  • Lexical substitution
  • Part-of-speech tagging
  • Polysemy
  • SemEval
  • Syntactic ambiguity
  • Word sense
  • Word sense induction

External links and suggested reading

  • Computational Linguistics Special Issue on Word Sense Disambiguation (1998)
  • Evaluation Exercises for Word Sense Disambiguation. The de facto standard benchmarks for WSD systems.
  • Roberto Navigli. Word Sense Disambiguation: A Survey, ACM Computing Surveys, 41(2), 2009, pp. 1–69. An up-to-date state of the art of the field.
  • Word Sense Disambiguation as defined in Scholarpedia
  • Word Sense Disambiguation: The State of the Art (PDF). A comprehensive overview by Nancy Ide & Jean Véronis (1998).
  • Word Sense Disambiguation Tutorial, by Rada Mihalcea and Ted Pedersen (2005).
  • Word Sense Disambiguation: Algorithms and Applications, edited by Eneko Agirre and Philip Edmonds (2006), Springer. Covers the entire field with chapters contributed by leading researchers. www.wsdbook.org site of the book
  • Bar-Hillel, Yehoshua. 1964. Language and Information. New York: Addison-Wesley.
  • Edmonds, Philip & Adam Kilgarriff. 2002. Introduction to the special issue on evaluating word sense disambiguation systems. Journal of Natural Language Engineering, 8(4):279-291.
  • Edmonds, Philip. 2005. Lexical disambiguation. The Elsevier Encyclopedia of Language and Linguistics, 2nd Ed., ed. by Keith Brown, 607-23. Oxford: Elsevier.
  • Ide, Nancy & Jean Véronis. 1998. Word sense disambiguation: The state of the art. Computational Linguistics, 24(1):1-40.
  • Jurafsky, Daniel & James H. Martin. 2000. Speech and Language Processing. New Jersey, USA: Prentice Hall.
  • Litkowski, K. C. 2005. Computational lexicons and dictionaries. In Encyclopaedia of Language and Linguistics (2nd ed.), K. R. Brown, Ed. Elsevier Publishers, Oxford, U.K., 753–761.
  • Manning, Christopher D. & Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press. http://nlp.stanford.edu/fsnlp/
  • Mihalcea, Rada. 2007. Word sense disambiguation. Encyclopedia of Machine Learning. Springer-Verlag.
  • Resnik, Philip and David Yarowsky. 2000. Distinguishing systems and distinguishing senses: New evaluation methods for word sense disambiguation, Natural Language Engineering, 5(2):113-133. http://www.cs.jhu.edu/~yarowsky/pubs/nle00.ps
  • Yarowsky, David. 2001. Word sense disambiguation. Handbook of Natural Language Processing, ed. by Dale et al., 629-654. New York: Marcel Dekker.
The source of this article is wikipedia, the free encyclopedia.  The text of this article is licensed under the GFDL.
 