
Word-sense disambiguation

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

Word-sense disambiguation is the process of identifying which sense of a word is meant in a sentence or other segment of context. In human language processing and cognition, it is usually subconscious.


Given that natural language requires reflection of neurological reality, as shaped by the abilities provided by the brain's neural networks, computer science has had a long-term challenge in developing the ability in computers to do natural language processing and machine learning. Many techniques have been researched, including dictionary-based methods that use the knowledge encoded in lexical resources, supervised machine learning methods in which

a classifier is trained for each distinct word on a corpus of manually sense-annotated examples, and completely unsupervised methods that cluster occurrences of words, thereby inducing word senses. Among these, supervised learning approaches have been the most successful algorithms to date. Accuracy of current algorithms is difficult to state without a host of caveats. In English, accuracy at

A dictionary to specify the senses which are to be disambiguated and a corpus of language data to be disambiguated (in some methods, a training corpus of language examples is also required). The WSD task has two variants: "lexical sample" (disambiguating the occurrences of a small sample of target words which were previously selected) and the "all words" task (disambiguation of all the words in a running text). The "all words" task

a neuronal network, is an interconnected population of neurons (typically containing multiple neural circuits). Biological neural networks are studied to understand the organization and functioning of nervous systems. Closely related are artificial neural networks, machine learning models inspired by biological neural networks. They consist of artificial neurons, which are mathematical functions that are designed to be analogous to

a bewildering variety of ways. The art of lexicography is to generalize from the corpus to definitions that evoke and explain the full range of meaning of a word, making it seem like words are well-behaved semantically. However, it is not at all clear if these same meaning distinctions are applicable in computational applications, as the decisions of lexicographers are usually driven by other considerations. In 2009,

a catalogue of a language's words (its wordstock); and a grammar, a system of rules which allow for the combination of those words into meaningful sentences. The lexicon is also thought to include bound morphemes, which cannot stand alone as words (such as most affixes). In some analyses, compound words and certain classes of idiomatic expressions, collocations and other phrasemes are also considered to be part of

a comprehensive body of world knowledge. These approaches are generally not considered to be very successful in practice, mainly because such a body of knowledge does not exist in a computer-readable format outside very limited domains. Additionally, given the long tradition in computational linguistics of trying such approaches in terms of coded knowledge, it can in some cases be hard to distinguish between linguistic knowledge and world knowledge. The first attempt

a given lexical knowledge base such as WordNet. Graph-based methods reminiscent of the spreading activation research of the early days of AI research have been applied with some success. More complex graph-based approaches have been shown to perform almost as well as supervised methods, or even to outperform them on specific domains. Recently, it has been reported that simple graph connectivity measures, such as degree, perform state-of-the-art WSD in
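The degree-based idea can be sketched as follows. The sense labels and relation edges below are invented for illustration; a real system would draw candidate senses and edges from a lexical knowledge base such as WordNet or BabelNet:

```python
# Minimal sketch of degree-based knowledge-based WSD (hypothetical sense
# inventory and relation edges, not a real knowledge base).
from collections import defaultdict

# Candidate senses per target word (assumed inventory).
senses = {
    "bank": ["bank#finance", "bank#river"],
    "deposit": ["deposit#money", "deposit#sediment"],
    "interest": ["interest#finance", "interest#curiosity"],
}

# Undirected semantic relations between senses (assumed knowledge base).
edges = [
    ("bank#finance", "deposit#money"),
    ("bank#finance", "interest#finance"),
    ("deposit#money", "interest#finance"),
    ("bank#river", "deposit#sediment"),
]

def disambiguate_by_degree(words):
    # Keep only edges whose endpoints are candidate senses of the input words,
    # then pick, for each word, the candidate sense with the highest degree.
    candidates = {s for w in words for s in senses[w]}
    degree = defaultdict(int)
    for a, b in edges:
        if a in candidates and b in candidates:
            degree[a] += 1
            degree[b] += 1
    return {w: max(senses[w], key=lambda s: degree[s]) for w in words}

print(disambiguate_by_degree(["bank", "deposit", "interest"]))
```

Because the finance senses form a densely connected subgraph, they win on degree; this is the intuition behind using simple connectivity measures for disambiguation.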

a language's lexicon. Neologisms are often introduced by children who produce erroneous forms by mistake. Other common sources are slang and advertising. There are two types of borrowings (neologisms based on external sources) that retain the sound of the source language material: The following are examples of external lexical expansion using the source language lexical item as the basic material for

a language's rules. For example, the suffix "-able" is usually only added to transitive verbs, as in "readable" but not "cryable". A compound word is a lexeme composed of several established lexemes, whose semantics is not the sum of that of their constituents. They can be interpreted through analogy, common sense and, most commonly, context. Compound words can have simple or complex morphological structures. Usually, only

a method that decouples an object's input representation into its properties, such as words and their word senses. AutoExtend uses a graph structure to map word (e.g. text) and non-word (e.g. synsets in WordNet) objects as nodes, and the relationships between nodes as edges. The relations (edges) in AutoExtend can express either addition or similarity between its nodes. The former captures the intuition behind


a minimal description. To describe the size of a lexicon, lexemes are grouped into lemmas. A lemma is a group of lexemes generated by inflectional morphology. Lemmas are represented in dictionaries by headwords that list the citation forms and any irregular forms, since these must be learned to use the words correctly. Lexemes derived from a word by derivational morphology are considered new lemmas. The lexicon

a pre-trained word-embedding model. These centroids are later used to select the word sense with the highest similarity of a target word to its immediately adjacent neighbors (i.e., its predecessor and successor words). After all words are annotated and disambiguated, they can be used as a training corpus in any standard word-embedding technique. In its improved version, MSSA can make use of word-sense embeddings to repeat its disambiguation process iteratively. Other approaches vary in their methods: The knowledge acquisition bottleneck

a single pulse packet throughout the entire network. The connectivity of a neural network stems from its biological structures and is usually challenging to map out experimentally. Scientists have used a variety of statistical tools to infer the connectivity of a network based on observed neuronal activities, i.e., spike trains. Recent research has shown that statistically inferred neuronal connections in subsampled neural networks strongly correlate with spike-train covariances, providing deeper insights into

a single vector representation, they can still be used to improve WSD. A simple approach to employing pre-computed word embeddings to represent word senses is to compute the centroids of sense clusters. In addition to word-embedding techniques, lexical databases (e.g., WordNet, ConceptNet, BabelNet) can also assist unsupervised systems in mapping words and their senses as dictionaries. Some techniques that combine lexical databases and word embeddings are presented in AutoExtend and Most Suitable Sense Annotation (MSSA). AutoExtend presents

a small number of surefire decision rules (e.g., 'play' in the context of 'bass' almost always indicates the musical instrument). The seeds are used to train an initial classifier, using any supervised method. This classifier is then used on the untagged portion of the corpus to extract a larger training set, in which only the most confident classifications are included. The process repeats, each new classifier being trained on

a successively larger training corpus, until the whole corpus is consumed, or until a given maximum number of iterations is reached. Other semi-supervised techniques use large quantities of untagged corpora to provide co-occurrence information that supplements the tagged corpora. These techniques have the potential to help in the adaptation of supervised models to different domains. Also, an ambiguous word in one language
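The bootstrapping loop described above can be sketched schematically. The seed contexts, scoring, and confidence margin below are invented toy devices, far simpler than the Yarowsky algorithm itself:

```python
# Schematic semi-supervised bootstrapping: train on seeds, absorb only the
# most confident labels from the untagged pool, and repeat (toy data).
from collections import Counter, defaultdict

def train(labeled):
    """Count context-word evidence per sense (a deliberately trivial model)."""
    counts = defaultdict(Counter)
    for context, sense in labeled:
        for word in context:
            counts[sense][word] += 1
    return counts

def classify(counts, context):
    """Return (best_sense, margin over the runner-up) for one context."""
    scores = {s: sum(c[w] for w in context) for s, c in counts.items()}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    runner_up = ranked[1][1] if len(ranked) > 1 else 0
    return ranked[0][0], ranked[0][1] - runner_up

# Seed examples for 'bass' (hypothetical): contexts as bags of words.
seeds = [({"play", "guitar"}, "music"), ({"caught", "river"}, "fish")]
untagged = [{"play", "band"}, {"river", "fishing"}, {"loud"}]

labeled, pool = list(seeds), list(untagged)
for _ in range(5):                      # cap on the number of iterations
    counts = train(labeled)
    confident, rest = [], []
    for ctx in pool:
        sense, margin = classify(counts, ctx)
        (confident if margin >= 1 else rest).append((ctx, sense))
    if not confident:                   # no confident labels left: stop
        break
    labeled += confident                # grow the training set
    pool = [ctx for ctx, _ in rest]

print(len(labeled))  # 4: the two informative contexts were absorbed
```

The uninformative context (`{"loud"}`) never clears the margin threshold, so it is never mislabeled; that selectivity is the point of bootstrapping.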

a task referred to as word-sense induction or discrimination. New occurrences of the word can then be classified into the closest induced clusters/senses. Performance has been lower than for the other methods described above, but comparisons are difficult since the induced senses must be mapped to a known dictionary of word senses. If a mapping to a set of dictionary senses is not desired, cluster-based evaluations (including measures of entropy and purity) can be performed. Alternatively, word-sense induction methods can be tested and compared within an application. For instance, it has been shown that word-sense induction improves Web search result clustering by increasing
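The cluster-based evaluation measures mentioned above can be sketched on toy data: purity is the fraction of items matching their cluster's majority gold label, and entropy here is the label entropy averaged over clusters, weighted by cluster size:

```python
# Sketch of purity and weighted average entropy for evaluating induced
# senses against gold sense labels (toy clusters for the word 'bass').
import math
from collections import Counter

def purity(clusters):
    """Fraction of items that match their cluster's majority gold label."""
    total = sum(len(c) for c in clusters)
    return sum(Counter(c).most_common(1)[0][1] for c in clusters) / total

def avg_entropy(clusters):
    """Size-weighted average entropy of gold labels within each cluster."""
    total = sum(len(c) for c in clusters)
    h = 0.0
    for c in clusters:
        probs = [n / len(c) for n in Counter(c).values()]
        h += len(c) / total * -sum(p * math.log2(p) for p in probs)
    return h

clusters = [["music", "music", "fish"], ["fish", "fish"]]
print(purity(clusters))       # 0.8
print(avg_entropy(clusters))  # ~0.551 (lower is better)
```

Higher purity and lower entropy indicate induced clusters that align more closely with the gold senses.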

a task – named lexical substitution – was proposed as a possible solution to the sense-discreteness problem. The task consists of providing a substitute for a word in context that preserves the meaning of the original word (potentially, substitutes can be chosen from the full lexicon of the target language, thus overcoming discreteness). There are two main approaches to WSD – deep approaches and shallow approaches. Deep approaches presume access to

a thesaurus method in the 1990s. Shallow approaches do not try to understand the text, but instead consider the surrounding words. These rules can be automatically derived by the computer, using a training corpus of words tagged with their word senses. This approach, while theoretically not as powerful as deep approaches, gives superior results in practice, due to the computer's limited world knowledge. There are four conventional approaches to WSD: Almost all these approaches work by defining

a window of n content words around each word to be disambiguated in the corpus, and statistically analyzing those n surrounding words. Two shallow approaches used to train and then disambiguate are Naïve Bayes classifiers and decision trees. In recent research, kernel-based methods such as support vector machines have shown superior performance in supervised learning. Graph-based approaches have also gained much attention from
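A Naïve Bayes classifier over a window of context words can be sketched as follows. The toy corpus and the two senses of 'bass' are invented; a real system would train one classifier per ambiguous word on a sense-annotated corpus:

```python
# Minimal Naive Bayes WSD over bags of context words, with Laplace
# smoothing over a shared vocabulary (toy training data).
import math
from collections import Counter, defaultdict

def train_nb(examples):
    prior, cond = Counter(), defaultdict(Counter)
    for context, sense in examples:
        prior[sense] += 1
        for w in context:
            cond[sense][w] += 1
    return prior, cond

def predict_nb(prior, cond, context, alpha=1.0):
    vocab = {w for c in cond.values() for w in c}
    best, best_lp = None, -math.inf
    for sense in prior:
        total = sum(cond[sense].values())
        lp = math.log(prior[sense] / sum(prior.values()))
        for w in context:
            # Additive (Laplace) smoothing handles unseen context words.
            lp += math.log((cond[sense][w] + alpha) / (total + alpha * len(vocab)))
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

corpus = [
    (["play", "guitar", "amp"], "music"),
    (["band", "play", "stage"], "music"),
    (["caught", "river", "rod"], "fish"),
    (["lake", "caught", "boat"], "fish"),
]
prior, cond = train_nb(corpus)
print(predict_nb(prior, cond, ["play", "loud"]))   # music
print(predict_nb(prior, cond, ["river", "boat"]))  # fish
```

The classifier simply treats the window as a bag of words, which is exactly the independence assumption that makes Naïve Bayes a shallow approach.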


is inter-judge variance. WSD systems are normally tested by having their results on a task compared against those of a human. However, while it is relatively easy to assign parts of speech to text, training people to tag senses has proven to be far more difficult. While users can memorize all of the possible parts of speech a word can take, it is often impossible for individuals to memorize all of

is also organized according to open and closed categories. Closed categories, such as determiners or pronouns, are rarely given new lexemes; their function is primarily syntactic. Open categories, such as nouns and verbs, have highly active generation mechanisms and their lexemes are more semantic in nature. A central role of the lexicon is documenting established lexical norms and conventions. Lexicalization

are different sense inventories. In order to define common evaluation datasets and procedures, public evaluation campaigns have been organized. Senseval (now renamed SemEval) is an international word-sense disambiguation competition, held every three years since 1998: Senseval-1 (1998), Senseval-2 (2001), Senseval-3 (2004), and its successor, SemEval (2007). The objective of the competition

is extremely difficult because of the different test sets, sense inventories, and knowledge resources adopted. Before the organization of specific evaluation campaigns, most systems were assessed on in-house, often small-scale, data sets. In order to test one's algorithm, developers must spend their time annotating all word occurrences. And comparing methods even on the same corpus is not valid if there

is generally considered a more realistic form of evaluation, but the corpus is more expensive to produce because human annotators have to read the definitions for each word in the sequence every time they need to make a tagging judgement, rather than once for a block of instances for the same target word. WSD was first formulated as a distinct computational task during the early days of machine translation in

is generally used in the context of a single language. Therefore, multilingual speakers are generally thought to have multiple lexicons. Speakers of language variants (Brazilian Portuguese and European Portuguese, for example) may be considered to possess a single lexicon. Thus a cash dispenser (British English) as well as an automatic teller machine or ATM in American English would be understood by both American and British speakers, despite each group using different dialects. When linguists study

is not a coherent concept: each task requires its own division of word meaning into senses relevant to that task. Additionally, completely different algorithms might be required by different applications. In machine translation, the problem takes the form of target-word selection. The "senses" are words in the target language, which often correspond to significant meaning distinctions in the source language ("bank" could translate to

is often translated into different words in a second language, depending on the sense of the word. Word-aligned bilingual corpora have been used to infer cross-lingual sense distinctions, a kind of semi-supervised system. Unsupervised learning is the greatest challenge for WSD researchers. The underlying assumption is that similar senses occur in similar contexts, and thus senses can be induced from text by clustering word occurrences using some measure of similarity of context,

is perhaps the major impediment to solving the WSD problem. Knowledge-based methods rely on knowledge about word senses, which is only sparsely formulated in dictionaries and lexical databases. Supervised methods depend crucially on the existence of manually annotated examples for every word sense, a requisite that can so far be met only for a handful of words for testing purposes, as is done in

is the vocabulary of a language or branch of knowledge (such as nautical or medical). In linguistics, a lexicon is a language's inventory of lexemes. The word lexicon derives from the Greek word λεξικόν (lexikon), neuter of λεξικός (lexikos), meaning 'of or for words'. Linguistic theories generally regard human languages as consisting of two parts: a lexicon, essentially


is the most common of word-formation strategies cross-linguistically. Comparative historical linguistics studies the evolution of languages and takes a diachronic view of the lexicon. The evolution of lexicons in different languages occurs through a parallel mechanism. Over time, historical forces work to shape the lexicon, making it simpler to acquire and often creating an illusion of great regularity in language. The term "lexicon"

is the process by which new words, having gained widespread usage, enter the lexicon. Since lexicalization may modify lexemes phonologically and morphologically, it is possible for a single etymological source to be inserted into a single lexicon in two or more forms. These pairs, called doublets, are often close semantically. Two examples are aptitude versus attitude and employ versus imply. The mechanisms, not mutually exclusive, are: Neologisms are new lexeme candidates which, if they gain wide usage over time, become part of

is to organize different lectures, prepare and hand-annotate corpora for testing systems, and perform comparative evaluations of WSD systems on several kinds of tasks, including all-words and lexical-sample WSD for different languages and, more recently, new tasks such as semantic role labeling, gloss WSD, lexical substitution, etc. The systems submitted for evaluation in these competitions usually integrate different techniques, and often combine supervised and knowledge-based methods (especially to avoid bad performance when training examples are lacking). In recent years,

the Senseval exercises. One of the most promising trends in WSD research is using the largest corpus ever accessible, the World Wide Web, to acquire lexical information automatically. WSD has traditionally been understood as an intermediate language-engineering technology which could improve applications such as information retrieval (IR). In this case, however, the reverse is also true: web search engines implement simple and robust IR techniques that can successfully mine

the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots. Neural network theory has served both to better identify how the neurons in the brain function and to provide the basis for efforts to create artificial intelligence. The preliminary theoretical base for contemporary neural networks

the coarse-grained homograph level (e.g., pen as writing instrument or enclosure), but go down one level to fine-grained polysemy, and disagreements arise. For example, in Senseval-2, which used fine-grained sense distinctions, human annotators agreed on only 85% of word occurrences. Word meaning is in principle infinitely variable and context-sensitive. It does not divide up easily into distinct or discrete sub-meanings. Lexicographers frequently discover in corpora loose and overlapping word meanings, and standard or conventional meanings extended, modulated, and exploited in

the 1940s, making it one of the oldest problems in computational linguistics. Warren Weaver first introduced the problem in a computational context in his 1949 memorandum on translation. Later, Bar-Hillel (1960) argued that WSD could not be solved by "electronic computer" because of the need, in general, to model all world knowledge. In the 1970s, WSD was a subtask of semantic interpretation systems developed within

the French banque – that is, 'financial bank' – or rive – that is, 'edge of river'). In information retrieval, a sense inventory is not necessarily required, because it is enough to know that a word is used in the same sense in the query and a retrieved document; what sense that is, is unimportant. Finally, the very notion of "word sense" is slippery and controversial. Most people can agree on distinctions at

the WSD evaluation task choices had grown, and the criteria for evaluating WSD have changed drastically depending on the variant of the WSD evaluation task. Below is an enumeration of the variety of WSD tasks: As technology evolves, Word Sense Disambiguation (WSD) tasks grow in different flavors, towards various research directions and for more languages:

Biological neural network

A neural network, also called

the Web for information to use in WSD. The historic lack of training data has provoked the appearance of some new algorithms and techniques, as described in Automatic acquisition of sense-tagged corpora. Knowledge is a fundamental component of WSD. Knowledge sources provide data which are essential to associate senses with words. They can vary from corpora of texts, either unlabeled or annotated with word senses, to machine-readable dictionaries, thesauri, glossaries, ontologies, etc. They can be classified as follows: Structured: Unstructured: Comparing and evaluating different WSD systems


the brain, and the other focused on the application of neural networks to artificial intelligence. The parallel distributed processing of the mid-1980s became popular under the name connectionism. The text by Rumelhart and McClelland (1986) provided a full exposition on the use of connectionism in computers to simulate neural processes. Artificial neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in

the brain, even though the relation between this model and the brain's biological architecture is debated, as it is not clear to what degree artificial neural networks mirror brain function. Theoretical and computational neuroscience is the field concerned with the analysis and computational modeling of biological neural systems. Since neural systems are intimately related to cognitive processes and behaviour,

the coarse-grained (homograph) level is routinely above 90% (as of 2009), with some methods on particular homographs achieving over 96%. On finer-grained sense distinctions, top accuracies from 59.1% to 69.0% have been reported in evaluation exercises (SemEval-2007, Senseval-2), where the baseline accuracy of the simplest possible algorithm of always choosing the most frequent sense was 51.4% and 57%, respectively. Disambiguation requires two strict inputs:
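The most-frequent-sense baseline used in those evaluations can be sketched on toy data (real evaluations use sense-annotated corpora such as those from Senseval/SemEval):

```python
# Sketch of the most-frequent-sense (MFS) baseline: always predict the
# sense seen most often in training, then measure held-out accuracy.
from collections import Counter

def mfs_baseline(train_senses, test_senses):
    mfs = Counter(train_senses).most_common(1)[0][0]
    return sum(s == mfs for s in test_senses) / len(test_senses)

train = ["music", "music", "music", "fish", "fish"]
test = ["music", "fish", "music", "music"]
print(mfs_baseline(train, test))  # 0.75
```

Any proposed WSD system has to beat this trivial predictor before its extra machinery can be said to help.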

the definitions of every semantic variant of each word in the previous definitions, and so on. Finally, the first word is disambiguated by selecting the semantic variant which minimizes the distance from the first to the second word. An alternative to the use of the definitions is to consider general word-sense relatedness and to compute the semantic similarity of each pair of word senses based on
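The shortest-path idea can be sketched with a toy relatedness graph; the senses and edges below are invented, whereas a real system would walk relations in a machine-readable dictionary or WordNet:

```python
# Sketch of disambiguation by shortest path between candidate senses in a
# (hypothetical) sense-relatedness graph, using breadth-first search.
from collections import deque

graph = {
    "pine#tree": ["evergreen", "cone#fruit"],
    "pine#long-for": ["yearn"],
    "cone#fruit": ["evergreen", "pine#tree"],
    "cone#shape": ["geometry"],
    "evergreen": ["pine#tree", "cone#fruit"],
    "yearn": ["pine#long-for"],
    "geometry": ["cone#shape"],
}

def distance(a, b):
    """Breadth-first shortest path length between two senses."""
    frontier, seen = deque([(a, 0)]), {a}
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return float("inf")

# Pick the sense pair for 'pine cone' that minimizes graph distance.
pairs = [(p, c) for p in ["pine#tree", "pine#long-for"]
               for c in ["cone#fruit", "cone#shape"]]
best = min(pairs, key=lambda pc: distance(*pc))
print(best)  # ('pine#tree', 'cone#fruit')
```

The unrelated sense pairs are disconnected in this toy graph, so the directly linked pair wins immediately.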

the electrical current strength decreased as the testing continued over time. Importantly, this work led to the discovery of the concept of habituation. McCulloch and Pitts (1943) also created a computational model for neural networks based on mathematics and algorithms. They called this model threshold logic. These early models paved the way for neural network research to split into two distinct approaches. One approach focused on biological processes in

the field is closely related to cognitive and behavioural modeling. The aim of the field is to create models of biological neural systems in order to understand how biological systems work. To gain this understanding, neuroscientists strive to link observed biological processes (data), biologically plausible mechanisms for neural processing and learning (neural network models), and theory (statistical learning theory and information theory). Many models are used, defined at different levels of abstraction and modeling different aspects of neural systems. They range from models of

the field of WSD is performed by using WordNet as a reference sense inventory for English. WordNet is a computational lexicon that encodes concepts as synonym sets (e.g., the concept of car is encoded as { car, auto, automobile, machine, motorcar }). Other resources used for disambiguation purposes include Roget's Thesaurus and Wikipedia. More recently, BabelNet, a multilingual encyclopedic dictionary, has been used for multilingual WSD. In any real test, part-of-speech tagging and sense tagging have proven to be very closely related, with each potentially imposing constraints upon

the field of artificial intelligence, starting with Wilks' preference semantics. However, since WSD systems were at the time largely rule-based and hand-coded, they were prone to a knowledge acquisition bottleneck. By the 1980s, large-scale lexical resources, such as the Oxford Advanced Learner's Dictionary of Current English (OALD), became available: hand-coding was replaced with knowledge automatically extracted from these resources, but disambiguation

the formation of memory. The general scientific community at the time was skeptical of Bain's theory because it required what appeared to be an inordinate number of neural connections within the brain. It is now apparent that the brain is exceedingly complex and that the same brain “wiring” can handle multiple problems and inputs. James' theory was similar to Bain's; however, he suggested that memories and actions resulted from electrical currents flowing among

the greatest word overlap in their dictionary definitions. For example, when disambiguating the words in "pine cone", the definitions of the appropriate senses both include the words evergreen and tree (at least in one dictionary). A similar approach searches for the shortest path between two words: the second word is iteratively searched among the definitions of every semantic variant of the first word, then among
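A simplified version of the overlap computation behind the Lesk method might look as follows. The sense labels and one-line definitions are invented for illustration; a real implementation would use machine-readable dictionary glosses and typically discard function words:

```python
# Simplified Lesk sketch: choose the pair of senses whose (toy) dictionary
# definitions share the most words.
defs = {
    "pine#tree": "evergreen tree with needle-shaped leaves",
    "pine#long-for": "waste away through sorrow or longing",
    "cone#fruit": "fruit of an evergreen tree with woody scales",
    "cone#shape": "solid geometric figure with a circular base",
}

def overlap(a, b):
    """Number of words the two definitions have in common."""
    return len(set(defs[a].split()) & set(defs[b].split()))

pairs = [(p, c) for p in ("pine#tree", "pine#long-for")
               for c in ("cone#fruit", "cone#shape")]
best = max(pairs, key=lambda pc: overlap(*pc))
print(best)  # ('pine#tree', 'cone#fruit')
```

Here the tree/fruit pair shares "evergreen", "tree" and "with", so it outscores every alternative pairing, mirroring the "pine cone" example in the text.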


the head requires inflection for agreement. Compounding may result in lexemes of unwieldy proportion. This is compensated for by mechanisms that reduce the length of words. A similar phenomenon has recently been shown to feature in social media as well, where hashtags compound to form longer hashtags that are at times more popular than the individual constituent hashtags forming the compound. Compounding

the lexicon. Dictionaries are lists of the lexicon, in alphabetical order, of a given language; usually, however, bound morphemes are not included. Items in the lexicon are called lexemes, lexical items, or word forms. Lexemes are not atomic elements but contain both phonological and morphological components. When describing the lexicon, a reductionist approach is used, trying to remain general while using

the mechanisms used by neural circuits. A biological neural network is composed of a group of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons, and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic synapses and other connections are possible. Apart from electrical signalling, there are other forms of signalling that arise from neurotransmitter diffusion. Artificial intelligence, cognitive modelling, and artificial neural networks are information-processing paradigms inspired by how biological neural systems process data. Artificial intelligence and cognitive modelling try to simulate some properties of biological neural networks. In

the most successful approaches, to date, probably because they can cope with the high dimensionality of the feature space. However, these supervised methods are subject to a new knowledge acquisition bottleneck, since they rely on substantial amounts of manually sense-tagged corpora for training, which are laborious and expensive to create. Because of the lack of training data, many word-sense disambiguation algorithms use semi-supervised learning, which allows both labeled and unlabeled data. The Yarowsky algorithm

the neologization, listed in decreasing order of phonetic resemblance to the original lexical item (in the source language): The following are examples of simultaneous external and internal lexical expansion using target-language lexical items as the basic material for the neologization, but still resembling the sound of the lexical item in the source language: Another mechanism involves generative devices that combine morphemes according to

the neurons in the brain. His model, by focusing on the flow of electrical currents, did not require individual neural connections for each memory or action. C. S. Sherrington (1898) conducted experiments to test James' theory. He ran electrical currents down the spinal cords of rats. However, instead of demonstrating an increase in electrical current as projected by James, Sherrington found that

the offset calculus, while the latter defines the similarity between two nodes. In MSSA, an unsupervised disambiguation system uses the similarity between word senses in a fixed context window to select the most suitable word sense, using a pre-trained word-embedding model and WordNet. For each context window, MSSA calculates the centroid of each word-sense definition by averaging the word vectors of its words in WordNet's glosses (i.e., a short defining gloss and one or more usage examples), using
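The gloss-centroid step can be sketched in the spirit of MSSA. The two-dimensional word vectors and the glosses below are invented; a real system would use pre-trained embeddings and WordNet glosses:

```python
# Sketch of a gloss-centroid disambiguation step: average the vectors of a
# sense's gloss words, then score each sense by cosine similarity to the
# centroid of the context words (toy 2-d vectors, hypothetical glosses).
import math

emb = {
    "music": (1.0, 0.0), "instrument": (0.9, 0.1),
    "fish": (0.0, 1.0), "water": (0.1, 0.9),
    "play": (0.8, 0.2), "guitar": (0.95, 0.05),
}

glosses = {
    "bass#music": ["music", "instrument"],
    "bass#fish": ["fish", "water"],
}

def centroid(words):
    """Component-wise mean of the vectors of the known words."""
    vecs = [emb[w] for w in words if w in emb]
    return tuple(sum(xs) / len(vecs) for xs in zip(*vecs))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

context = centroid(["play", "guitar"])
best = max(glosses, key=lambda s: cosine(centroid(glosses[s]), context))
print(best)  # bass#music
```

The context centroid for "play guitar" sits close to the music-gloss centroid, so the musical sense is selected; iterating this over a corpus is what produces sense-annotated training data in the MSSA pipeline.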

the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word may be determined by words further away. The success rate for part-of-speech tagging algorithms is at present much higher than that for WSD, the state of the art being around 96% accuracy or better, compared to less than 75% accuracy in word-sense disambiguation with supervised learning. These figures are typical for English and may be very different from those for other languages. Another problem

the other. The question of whether these tasks should be kept together or decoupled is still not unanimously resolved, but recently scientists have inclined to test these things separately (e.g., in the Senseval/SemEval competitions, parts of speech are provided as input for the text to disambiguate). Both WSD and part-of-speech tagging involve disambiguating or tagging words. However, algorithms used for one do not tend to work well for

the presence of a sufficiently rich lexical knowledge base. Also, automatically transferring knowledge in the form of semantic relations from Wikipedia to WordNet has been shown to boost simple knowledge-based methods, enabling them to rival the best supervised systems and even outperform them in a domain-specific setting. The use of selectional preferences (or selectional restrictions) is also useful: for example, knowing that one typically cooks food, one can disambiguate
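Selectional preferences can be sketched as a simple filter over candidate senses; the verb preferences and sense classes below are invented for illustration:

```python
# Toy sketch of selectional preferences: a verb constrains the semantic
# class of its object, which filters the candidate senses of that object.
preferences = {"cook": {"food"}, "play": {"instrument"}}   # assumed
sense_class = {"bass#fish": "food", "bass#music": "instrument"}

def filter_senses(verb, candidate_senses):
    """Keep only the senses whose semantic class the verb accepts."""
    allowed = preferences.get(verb, set())
    return [s for s in candidate_senses if sense_class[s] in allowed]

print(filter_senses("cook", ["bass#fish", "bass#music"]))  # ['bass#fish']
print(filter_senses("play", ["bass#fish", "bass#music"]))  # ['bass#music']
```

In "I am cooking basses", the verb's preference for food-class objects rules out the musical-instrument sense before any statistical model is consulted.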


the quality of the result clusters and the degree of diversification of result lists. It is hoped that unsupervised learning will overcome the knowledge acquisition bottleneck, because such methods are not dependent on manual effort. Representing words considering their context through fixed-size dense vectors (word embeddings) has become one of the most fundamental building blocks in several NLP systems. Even though most traditional word-embedding techniques conflate words with multiple meanings into

the research community, and currently achieve performance close to the state of the art. The Lesk algorithm is the seminal dictionary-based method. It is based on the hypothesis that words used together in text are related to each other and that the relation can be observed in the definitions of the words and their senses. Two (or more) words are disambiguated by finding the pair of dictionary senses with

the return of knowledge-based systems via graph-based methods. Still, supervised systems continue to perform best. One problem with word-sense disambiguation is deciding what the senses are, as different dictionaries and thesauruses will provide different divisions of words into senses. Some researchers have suggested choosing a particular dictionary and using its set of senses to deal with this issue. Generally, however, research results using broad distinctions in senses have been much better than those using narrow ones. Most researchers continue to work on fine-grained WSD. Most research in

the senses a word can take. Moreover, humans do not agree on the task at hand – give them a list of senses and sentences, and humans will not always agree on which word belongs in which sense. As human performance serves as the standard, it is an upper bound for computer performance. Human performance, however, is much better on coarse-grained than fine-grained distinctions, so this again is why research on coarse-grained distinctions has been put to the test in recent WSD evaluation exercises. A task-independent sense inventory

the short-term behaviour of individual neurons, through models of the dynamics of neural circuitry arising from interactions between individual neurons, to models of behaviour arising from abstract neural modules that represent complete subsystems. These include models of the long-term and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to

the structure of neural circuits and their computational properties. While initially research had been concerned mostly with the electrical characteristics of neurons, a particularly important part of the investigation in recent years has been the exploration of the role of neuromodulators such as dopamine, acetylcholine, and serotonin on behaviour and learning. Biophysical models, such as BCM theory, have been important in understanding mechanisms for synaptic plasticity, and have had applications in both computer science and neuroscience.

Lexicon

A lexicon (plural: lexicons, rarely lexica)

the system level. In August 2020, scientists reported that bi-directional connections, or added appropriate feedback connections, can accelerate and improve communication between and within modular neural networks of the brain's cerebral cortex, and lower the threshold for their successful communication. They showed that adding feedback connections between a resonance pair can support successful propagation of

the word bass in "I am cooking basses" (i.e., it's not a musical instrument). Supervised methods are based on the assumption that the context can provide enough evidence on its own to disambiguate words (hence, common sense and reasoning are deemed unnecessary). Probably every machine learning algorithm has been applied to WSD, including associated techniques such as feature selection, parameter optimization, and ensemble learning. Support vector machines and memory-based learning have been shown to be

was an early example of such an algorithm. It uses the 'one sense per collocation' and the 'one sense per discourse' properties of human languages for word-sense disambiguation. From observation, words tend to exhibit only one sense in a given discourse and in a given collocation. The bootstrapping approach starts from a small amount of seed data for each word: either manually tagged training examples or

was independently proposed by Alexander Bain (1873) and William James (1890). In their work, both thoughts and body activity resulted from interactions among neurons within the brain. For Bain, every activity led to the firing of a certain set of neurons. When activities were repeated, the connections between those neurons strengthened. According to his theory, this repetition was what led to

was still knowledge-based or dictionary-based. In the 1990s, the statistical revolution advanced computational linguistics, and WSD became a paradigm problem on which to apply supervised machine learning techniques. The 2000s saw supervised techniques reach a plateau in accuracy, and so attention has shifted to coarser-grained senses, domain adaptation, semi-supervised and unsupervised corpus-based systems, combinations of different methods, and

was that by Margaret Masterman and her colleagues, at the Cambridge Language Research Unit in England, in the 1950s. This attempt used as data a punched-card version of Roget's Thesaurus and its numbered "heads" as an indicator of topics, and looked for repetitions in text using a set-intersection algorithm. It was not very successful, but it had strong relationships to later work, especially Yarowsky's machine learning optimisation of
