
Named-entity recognition

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.

Named-entity recognition (NER) (also known as (named) entity identification, entity chunking, and entity extraction) is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.


Most research on NER/NEE systems has been structured as taking an unannotated block of text, such as this one:

Jim bought 300 shares of Acme Corp. in 2006.

and producing an annotated block of text that highlights the names of entities:

[Jim]Person bought 300 shares of [Acme Corp.]Organization in [2006]Time.

In this example, a person name consisting of one token, a two-token company name and a temporal expression have been detected and classified.
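The same input/output behavior can be reproduced with an off-the-shelf NER library. The sketch below uses spaCy and its small English model; neither is mentioned in the article, so treat it purely as an illustration of the task, and note that the exact labels produced depend on the model.

```python
# Minimal sketch: run a pretrained NER model over the example sentence.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Jim bought 300 shares of Acme Corp. in 2006.")

for ent in doc.ents:
    # Each detected span is printed with its predicted label,
    # e.g. "Jim PERSON", "Acme Corp. ORG", "2006 DATE".
    print(ent.text, ent.label_)
```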

A balanced measurement of performance. Macro F1 is a macro-averaged F1 score aiming at a balanced performance measurement. To calculate macro F1, two different averaging formulas have been used: the F1 score of the (arithmetic) class-wise precision and recall means, or the arithmetic mean of class-wise F1 scores, where the latter exhibits more desirable properties. Micro F1 is the harmonic mean of micro precision (pooled true positives divided by pooled true positives plus false positives) and micro recall (pooled true positives divided by pooled true positives plus false negatives). Since in multi-class evaluation
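A small sketch of the two averaging strategies, assuming per-class counts of true positives, false positives, and false negatives are already available (the class names and counts are made up for illustration):

```python
# Contrast macro F1 (mean of class-wise F1) with micro F1 (F1 of pooled counts).
counts = {
    "PERSON": {"tp": 80, "fp": 10, "fn": 20},
    "ORG":    {"tp": 30, "fp": 30, "fn": 10},
    "LOC":    {"tp": 5,  "fp": 1,  "fn": 15},
}

def f1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Macro F1: arithmetic mean of class-wise F1 scores.
macro_f1 = sum(f1(**c) for c in counts.values()) / len(counts)

# Micro F1: pool the counts over all classes, then compute a single F1.
tp = sum(c["tp"] for c in counts.values())
fp = sum(c["fp"] for c in counts.values())
fn = sum(c["fn"] for c in counts.values())
micro_f1 = f1(tp, fp, fn)

print(round(macro_f1, 3), round(micro_f1, 3))
```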

A given purpose. For example, one system might always omit titles such as "Ms." or "Ph.D.", but be compared to a system or ground-truth data that expects titles to be included. In that case, every such name is treated as an error. Because of such issues, it is important to actually examine the kinds of errors and decide how important they are given one's goals and requirements. Evaluation models based on

A great deal of interest in entity identification in the molecular biology, bioinformatics, and medical natural language processing communities. The most common entity of interest in that domain has been names of genes and gene products. There has also been considerable interest in the recognition of chemical entities and drugs in the context of the CHEMDNER competition, with 27 teams participating in this task. Despite high F1 numbers reported on

A referent by its properties (see also De dicto and de re), and names for kinds of things as opposed to individuals (for example "Bank"). Full named-entity recognition is often broken down, conceptually and possibly also in implementations, as two distinct problems: detection of names, and classification of the names by the type of entity they refer to (e.g. person, organization, or location). The first phase

State-of-the-art NER systems for English produce near-human performance. For example, the best system entering MUC-7 scored an F-measure of 93.39%, while human annotators scored 97.60% and 96.95%. Notable NER platforms include:

In the expression named entity, the word named restricts the task to those entities for which one or many strings, such as words or phrases, stand (fairly) consistently for some referent. This

A token-by-token matching have been proposed. Such models may be given partial credit for overlapping matches (such as using the Intersection over Union criterion). They allow a finer-grained evaluation and comparison of extraction systems. NER systems have been created that use linguistic grammar-based techniques as well as statistical models such as machine learning. Hand-crafted grammar-based systems typically obtain better precision, but at
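A sketch of the partial-credit idea, using the Intersection over Union criterion mentioned above on token-index spans; the span boundaries and the 0.5 threshold are illustrative choices, not values given in the article:

```python
# Partial-credit matching of predicted vs. gold entity spans via IoU.
# Spans are (start_token, end_token) pairs with an exclusive end.
def iou(a, b):
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

gold = [(0, 1), (4, 6)]        # e.g. [Jim], [Acme Corp.]
pred = [(0, 1), (4, 5)]        # second prediction misses one token

matches = [(g, p, iou(g, p)) for g in gold for p in pred if iou(g, p) > 0]
# Count a prediction as (partially) correct if its IoU exceeds a threshold.
threshold = 0.5
credit = sum(1 for _, _, score in matches if score >= threshold)
print(matches, credit)
```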

A variant of the F1 score has been defined as follows: precision is the fraction of predicted entities whose span and type both match a gold entity exactly, recall is the fraction of gold entities matched exactly by some prediction, and F1 is their harmonic mean. It follows from the above definition that any prediction that misses a single token, includes a spurious token, or has the wrong class, is a hard error and does not contribute positively to either precision or recall. Thus, this measure may be said to be pessimistic: it can be the case that many "errors" are close to correct, and might be adequate for
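A minimal sketch of this strict, exact-match evaluation over (start, end, type) triples; the entities below are made up for illustration:

```python
# Entity-level exact-match evaluation: an entity counts as correct only
# if both its span and its type match a gold entity exactly.
gold = {(0, 1, "PER"), (4, 6, "ORG"), (8, 9, "DATE")}
pred = {(0, 1, "PER"), (4, 5, "ORG"), (8, 9, "DATE")}  # one span is off by a token

correct = len(gold & pred)                 # exact matches only
precision = correct / len(pred) if pred else 0.0
recall = correct / len(gold) if gold else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(precision, recall, f1)   # the near-miss span counts as a hard error
```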

F1 score

In statistical analysis of binary classification and information retrieval systems, the F-score or F-measure is a measure of predictive performance. It is calculated from the precision and recall of the test, where the precision is the number of true positive results divided by the number of all samples predicted to be positive, including those not identified correctly, and

Is also used in machine learning. However, the F-measures do not take true negatives into account, hence measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferred to assess the performance of a binary classifier. The F-score has been widely used in the natural language processing literature, such as in the evaluation of named entity recognition and word segmentation. The F1 score

Is arguable that the definition of named entity is loosened in such cases for practical reasons. The definition of the term named entity is therefore not strict and often has to be explained in the context in which it is used. Certain hierarchies of named entity types have been proposed in the literature. The BBN categories, proposed in 2002, are used for question answering and consist of 29 types and 64 subtypes. Sekine's extended hierarchy, proposed in 2002,



Is closely related to rigid designators, as defined by Kripke, although in practice NER deals with many names and referents that are not philosophically "rigid". For instance, the automotive company created by Henry Ford in 1903 can be referred to as Ford or Ford Motor Company, although "Ford" can refer to many other entities as well (see Ford). Rigid designators include proper names as well as terms for certain biological species and substances, but exclude pronouns (such as "it"; see coreference resolution), descriptions that pick out

Is devising models to deal with linguistically complex contexts such as Twitter and search queries. Researchers have compared NER performance across different statistical models, such as HMM (hidden Markov model), ME (maximum entropy), and CRF (conditional random fields), and across feature sets. Some researchers have also recently proposed graph-based semi-supervised learning models for language-specific NER tasks. A recently emerging task of identifying "important expressions" in text and cross-linking them to Wikipedia can be seen as an instance of extremely fine-grained named-entity recognition, where

Is involved in tuning NER systems to perform well in a new domain; this is true for both rule-based and trainable statistical systems. Early work in NER systems in the 1990s was aimed primarily at extraction from journalistic articles. Attention then turned to the processing of military dispatches and reports. Later stages of the automatic content extraction (ACE) evaluation also included several types of informal text styles, such as weblogs and transcripts of conversational telephone speech. Since about 1998, there has been

Is made of 200 subtypes. More recently, in 2011 Ritter used a hierarchy based on common Freebase entity types in ground-breaking experiments on NER over social media text. To evaluate the quality of an NER system's output, several measures have been defined. The usual measures are called precision, recall, and F1 score. However, several issues remain in just how to calculate those values. These statistical measures work reasonably well for

Is met by the P4 metric definition, which is sometimes indicated as a symmetrical extension of F1. While the F-measure is the harmonic mean of recall and precision, the Fowlkes–Mallows index is their geometric mean. The F-score is also used for evaluating classification problems with more than two classes (multiclass classification). A common method is to average the F-score over each class, aiming at

Is problematic. One way to address this issue (see, e.g., Siblini et al., 2020) is to use a standard class ratio r0 when making such comparisons. The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance. It is particularly relevant in applications which are primarily concerned with

Is related to the field of binary classification where recall is often termed "sensitivity". The precision-recall curve, and thus the Fβ score, explicitly depends on the ratio r of positive to negative test cases. This means that comparison of the F-score across different problems with differing class ratios

Is the Dice coefficient of the set of retrieved items and the set of relevant items. David Hand and others criticize the widespread use of the F1 score since it gives equal importance to precision and recall. In practice, different types of mis-classifications incur different costs. In other words, the relative importance of precision and recall is an aspect of the problem. According to Davide Chicco and Giuseppe Jurman,
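Spelled out as an equation (a restatement of the equivalence above, with A the set of retrieved items and B the set of relevant items):

```latex
F_1 \;=\; \frac{2\,|A \cap B|}{|A| + |B|} \;=\; \mathrm{Dice}(A, B)
```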

Is the harmonic mean of precision and recall:

F1 = 2 · precision · recall / (precision + recall)

A more general F score, Fβ, that uses a positive real factor β, where β is chosen such that recall is considered β times as important as precision, is:

Fβ = (1 + β²) · precision · recall / (β² · precision + recall)

In terms of Type I and type II errors this becomes:

Fβ = (1 + β²) · TP / ((1 + β²) · TP + β² · FN + FP)

where FP and FN count Type I and Type II errors respectively. Two commonly used values for β are 2, which weighs recall higher than precision, and 0.5, which weighs recall lower than precision. The F-measure
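A quick numeric sketch of these formulas; the precision and recall values are made up solely to show how β shifts the emphasis:

```python
# Compute F-beta from precision and recall for a few beta values.
def f_beta(precision, recall, beta):
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.9, 0.5   # high precision, low recall (made-up numbers)
print(f_beta(p, r, 1.0))   # balanced F1, about 0.643
print(f_beta(p, r, 2.0))   # F2 is pulled down toward the weak recall
print(f_beta(p, r, 0.5))   # F0.5 stays closer to the strong precision
```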

Is typically simplified to a segmentation problem: names are defined to be contiguous spans of tokens, with no nesting, so that "Bank of America" is a single name, disregarding the fact that inside this name, the substring "America" is itself a name. This segmentation problem is formally similar to chunking. The second phase requires choosing an ontology by which to organize categories of things. Temporal expressions and some numerical expressions (e.g., money, percentages, etc.) may also be considered as named entities in
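In practice this segmentation view is often encoded with a per-token tagging scheme such as BIO (begin/inside/outside). The article does not name a particular scheme, so the sketch below is only one common way to realize it:

```python
# Encode contiguous, non-nested name spans as per-token BIO tags and
# decode them back into (start, end, type) spans.
tokens = ["Jim", "bought", "300", "shares", "of", "Bank", "of", "America", "."]
tags   = ["B-PER", "O",    "O",   "O",      "O",  "B-ORG", "I-ORG", "I-ORG", "O"]

# "America" is not a separate entity here because it is inside the ORG span.
spans, start = [], None
for i, tag in enumerate(tags + ["O"]):          # sentinel to flush the last span
    if tag.startswith("B-") or tag == "O":
        if start is not None:
            spans.append((start, i, tags[start][2:]))
            start = None
    if tag.startswith("B-"):
        start = i
print(spans)   # [(0, 1, 'PER'), (5, 8, 'ORG')]
```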



The F1 score is less truthful and informative than the Matthews correlation coefficient (MCC) in binary classification evaluation. David M. W. Powers has pointed out that F1 ignores the true negatives and thus is misleading for unbalanced classes, while kappa and correlation measures are symmetric and assess both directions of predictability: the classifier predicting the true class and
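A small illustration of this criticism, assuming scikit-learn is installed (the data are made up): on a heavily imbalanced labeling, a classifier that ignores the minority class entirely still earns a high F1, while MCC exposes it.

```python
from sklearn.metrics import f1_score, matthews_corrcoef

# 90 positives, 10 negatives; the classifier simply predicts "positive"
# for every sample, so it gets every negative wrong.
y_true = [1] * 90 + [0] * 10
y_pred = [1] * 100

print(f1_score(y_true, y_pred))          # ~0.947, looks excellent
print(matthews_corrcoef(y_true, y_pred)) # 0.0, no predictive power on negatives
```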

The MUC-7 dataset, the problem of named-entity recognition is far from being solved. The main efforts are directed at reducing the annotation labor by employing semi-supervised learning, achieving robust performance across domains, and scaling up to fine-grained entity types. In recent years, many projects have turned to crowdsourcing, which is a promising solution to obtain high-quality aggregate human judgments for supervised and semi-supervised machine learning approaches to NER. Another challenging task

The context of the NER task. While some instances of these types are good examples of rigid designators (e.g., the year 2001) there are also many invalid ones (e.g., I take my vacations in "June"). In the first case, the year 2001 refers to the 2001st year of the Gregorian calendar. In the second case, the month June may refer to the month of an undefined year (past June, next June, every June, etc.). It

The correct type). This suffers from at least two problems: first, the vast majority of tokens in real-world text are not part of entity names, so the baseline accuracy (always predict "not an entity") is extravagantly high, typically >90%; and second, mispredicting the full span of an entity name is not properly penalized (finding only a person's first name when his last name follows might be scored as ½ accuracy). In academic conferences such as CoNLL,
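A tiny sketch of the first problem, with a made-up tag sequence: token-level accuracy rewards a system that never predicts any entity at all.

```python
# Token-level accuracy of the trivial "always predict 'not an entity'" baseline.
gold_tags = ["B-PER", "O", "O", "O", "O", "B-ORG", "I-ORG", "O", "B-DATE", "O"]
pred_tags = ["O"] * len(gold_tags)

accuracy = sum(g == p for g, p in zip(gold_tags, pred_tags)) / len(gold_tags)
print(accuracy)   # 0.6 on this toy sequence; on realistic text, where most
                  # tokens are "O", this baseline easily exceeds 0.9
```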

The cost of lower recall and months of work by experienced computational linguists. Statistical NER systems typically require a large amount of manually annotated training data. Semi-supervised approaches have been suggested to avoid part of the annotation effort. Many different classifier types have been used to perform machine-learned NER, with conditional random fields being a typical choice. In 2001, research indicated that even state-of-the-art NER systems were brittle, meaning that NER systems developed for one domain did not typically perform well on other domains. Considerable effort
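A sketch of the kind of hand-crafted, per-token features commonly fed to a conditional random field tagger; the exact feature set and the mention of sklearn-crfsuite are illustrative assumptions, not details given in the article.

```python
# Build simple per-token feature dictionaries for a CRF-style NER tagger.
def token_features(tokens, i):
    word = tokens[i]
    return {
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),      # capitalization is a strong cue
        "word.isdigit": word.isdigit(),
        "suffix3": word[-3:],
        "prev.lower": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next.lower": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

tokens = ["Jim", "bought", "300", "shares", "of", "Acme", "Corp.", "in", "2006", "."]
features = [token_features(tokens, i) for i in range(len(tokens))]
# A list of such per-sentence feature sequences, paired with per-token gold
# tags, is the kind of training input CRF toolkits such as sklearn-crfsuite take.
```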

The development of new and better methods of information extraction. The character of this competition, with many concurrent research teams competing against one another, required the development of standards for evaluation, e.g. the adoption of metrics like precision and recall. Only for the first conference (MUC-1) could the participants choose the output format for the extracted information. From

The obvious cases of finding or missing a real entity exactly, and for finding a non-entity. However, NER can fail in many other ways, many of which are arguably "partially correct" and should not be counted as complete successes or failures: for example, identifying a real entity but with the wrong span boundaries, or assigning it a related but incorrect type. One overly simple method of measuring accuracy is merely to count what fraction of all tokens in the text were correctly or incorrectly identified as part of entity references (or as being entities of

The other. The highest possible value of an F-score is 1.0, indicating perfect precision and recall, and the lowest possible value is 0, if precision and recall are zero. The name F-measure is believed to be named after a different F function in Van Rijsbergen's book, when introduced to the Fourth Message Understanding Conference (MUC-4, 1992). The traditional F-measure or balanced F-score (F1 score)

The positive class and where the positive class is rare relative to the negative class. Earlier works focused primarily on the F1 score, but with the proliferation of large-scale search engines, performance goals changed to place more emphasis on either precision or recall, and so Fβ is seen in wide application. The F-score

The recall is the number of true positive results divided by the number of all samples that should have been identified as positive. Precision is also known as positive predictive value, and recall is also known as sensitivity in diagnostic binary classification. The F1 score is the harmonic mean of the precision and recall. It thus symmetrically represents both precision and recall in one metric. The more generic Fβ score applies additional weights, valuing one of precision or recall more than
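Written out with the usual counts (TP = true positives, FP = false positives, FN = false negatives), the definitions above are:

```latex
\mathrm{precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}
```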


The second conference onward, the output format by which the participants' systems would be evaluated was prescribed. For each topic, fields were given that had to be filled with information from the text. Typical fields were, for example, the cause, the agent, the time and place of an event, the consequences, etc. The number of fields increased from conference to conference. At the sixth conference (MUC-6)

The task of recognition of named entities and coreference was added. For named entities, all phrases in the text were supposed to be marked as person, location, organization, time, or quantity. The topics and text sources that were processed show a continuous move from military to civilian themes, mirroring the change in business interest in information extraction taking place at the time.

The true class predicting the classifier prediction, proposing separate multiclass measures Informedness and Markedness for the two directions, and noting that their geometric mean is correlation. Another source of critique of F1 is its lack of symmetry: it may change its value when the dataset labeling is changed, i.e., when the "positive" samples are renamed "negative" and vice versa. This criticism

The types are the actual Wikipedia pages describing the (potentially ambiguous) concepts. The output of such a Wikification system links each recognized expression to the Wikipedia page that describes it. Another field that has seen progress but remains challenging is the application of NER to Twitter and other microblogs, considered "noisy" due to the non-standard orthography, shortness, and informality of the texts. NER challenges on English tweets have been organized by research communities to compare the performance of various approaches, such as bidirectional LSTMs, Learning-to-Search, or CRFs.

Message Understanding Conference

The Message Understanding Conferences (MUC) for computing and computer science were initiated and financed by DARPA (Defense Advanced Research Projects Agency) to encourage

Was derived so that Fβ "measures the effectiveness of retrieval with respect to a user who attaches β times as much importance to recall as precision". It is based on Van Rijsbergen's effectiveness measure

E = 1 − 1 / (α/P + (1 − α)/R),

where P and R denote precision and recall. Their relationship is Fβ = 1 − E, where α = 1/(1 + β²). This
