NLU

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.

Natural language understanding (NLU) or natural language interpretation (NLI) is a subset of natural language processing in artificial intelligence that deals with machine reading comprehension. NLU has been considered an AI-hard problem.

NLU may refer to:
Natural-language understanding, in computational linguistics and AI
National Labor Union, United States, 1866–1873
National Law Universities, in India
National-Louis University, Illinois, United States
University of Louisiana at Monroe (called Northeast Louisiana University 1969–1999), United States
Felipe Ángeles International Airport, Zumpango, Mexico (opened 2022; IATA code: NLU)
Insel Air Aruba, a Dutch Caribbean airline (2012–2017; ICAO code: NLU)

A case-based physician; and EXPEDITOR, a case-based logistics manager. Kolodner's classic work in this area, Case-Based Reasoning (1993), has been cited thousands of times by researchers. Her research interests are the implications and applications of cognition to education and educational technology, artificial intelligence, cognitive science, case-based reasoning, novice-expert evolution, the role of experience in expert and common-sense reasoning, design cognition, creativity, design of decision-aiding tools, and interactive learning environments. Kolodner has published

A company for developing a natural language interface for database queries on personal computers. However, with the advent of mouse-driven graphical user interfaces, Symantec changed direction. A number of other commercial efforts were started around the same time, e.g., Larry R. Harris at the Artificial Intelligence Corporation and Roger Schank and his students at Cognitive Systems Corp. In 1983, Michael Dyer developed

A dialogue in English on any topic, the most popular being psychotherapy. ELIZA worked by simple parsing and substitution of key words into canned phrases, and Weizenbaum sidestepped the problem of giving the program a database of real-world knowledge or a rich lexicon. Yet ELIZA gained surprising popularity as a toy project and can be seen as a very early precursor to current commercial systems such as those used by Ask.com. In 1969, Roger Schank at Stanford University introduced

a meaningful conversation with machines is only possible when we match every word to the correct meaning based on the meanings of the other words in the sentence – just like a 3-year-old does without guesswork." The umbrella term "natural language understanding" can be applied to a diverse set of computer applications, ranging from small, relatively simple tasks such as short commands issued to robots, to highly complex endeavors such as

a number of years. In 1971, Terry Winograd finished writing SHRDLU for his PhD thesis at MIT. SHRDLU could understand simple English sentences in a restricted world of children's blocks to direct a robotic arm to move items. The successful demonstration of SHRDLU provided significant momentum for continued research in the field. Winograd continued to be a major influence in the field with

a small range of applications. Narrow but deep systems explore and model mechanisms of understanding, but they still have limited application. Systems that attempt to understand the contents of a document such as a news release beyond simple keyword matching and to judge its suitability for a user are broader and require significant complexity, but they are still somewhat shallow. Systems that are both very broad and very deep are beyond

a system determine both the complexity of the system (and the implied challenges) and the types of applications it can deal with. The "breadth" of a system is measured by the sizes of its vocabulary and grammar. The "depth" is measured by the degree to which its understanding approximates that of a fluent native speaker. At the narrowest and shallowest, English-like command interpreters require minimal complexity, but have

is a Regents' Professor Emerita of Computing and Cognitive Science in the School of Interactive Computing in Georgia Tech's College of Computing. She spent the 1996-97 academic year as a Visiting Professor at the Hebrew University of Jerusalem in Israel. From August 2010 until July 2014, she was on loan to the National Science Foundation, where she was a Program Officer in the CISE and EHR Directorates and had responsibility for

Natural-language understanding

There is considerable commercial interest in the field because of its application to automated reasoning, machine translation, question answering, news-gathering, text categorization, voice-activation, archiving, and large-scale content analysis. The program STUDENT, written in 1964 by Daniel Bobrow for his PhD dissertation at MIT,

is generally achieved by mapping the derived meaning into a set of assertions in predicate logic, then using logical deduction to arrive at conclusions. Therefore, systems based on functional languages such as Lisp need to include a subsystem to represent logical assertions, while logic-oriented systems such as those using the language Prolog generally rely on an extension of the built-in logical representation framework. The management of context in NLU can present special challenges. A large variety of examples and counterexamples have resulted in multiple approaches to
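
To make the predicate-logic route described above concrete, here is a minimal Python sketch, not taken from any particular NLU system: meaning derived from a sentence is stored as (predicate, argument) assertions, and a single forward-chaining rule is applied to reach a conclusion. The facts, the rule, and helpers such as forward_chain are invented for illustration.

```python
# Minimal forward chaining over predicate-logic-style assertions (illustrative only).

# Facts derived from a parsed sentence, e.g. "Socrates is a man."
facts = {("man", "socrates")}

# Rules: if the premise holds (with variable binding), assert the conclusion.
# Variables are strings starting with "?".
rules = [
    ([("man", "?x")], ("mortal", "?x")),
]

def substitute(term, bindings):
    return tuple(bindings.get(t, t) for t in term)

def match(premise, fact, bindings):
    """Try to unify one premise with one fact, extending the bindings."""
    if len(premise) != len(fact):
        return None
    new = dict(bindings)
    for p, f in zip(premise, fact):
        if p.startswith("?"):
            if p in new and new[p] != f:
                return None
            new[p] = f
        elif p != f:
            return None
    return new

def forward_chain(facts, rules):
    """Apply rules until no new assertions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # This toy version only handles single-premise rules.
            for fact in list(derived):
                bindings = match(premises[0], fact, {})
                if bindings is not None:
                    new_fact = substitute(conclusion, bindings)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(forward_chain(facts, rules))
# {('man', 'socrates'), ('mortal', 'socrates')}
```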

is one of the earliest known attempts at NLU by a computer. Eight years after John McCarthy coined the term artificial intelligence, Bobrow's dissertation (titled Natural Language Input for a Computer Problem Solving System) showed how a computer could understand simple natural language input to solve algebra word problems. A year later, in 1965, Joseph Weizenbaum at MIT wrote ELIZA, an interactive program that carried on

is working toward a set of projects that will integrate learning technologies coherently to support disciplinary and everyday learning, support project-based pedagogy that works, and connect to the best in curriculum for active learning. As of July 2020, she is a Professor of the Practice at Boston College's Lynch School of Education. Kolodner graduated with a Bachelor of Arts degree in math and computer science from Brandeis University in 1976. She then completed her Master of Science degree in computer science in 1977 and her PhD in computer science in 1980 from Yale University. Kolodner

the Georgia Institute of Technology. She was Founding Editor in Chief of The Journal of the Learning Sciences and served in that role for 19 years. She was Founding Executive Officer of the International Society of the Learning Sciences (ISLS). From August 2010 through July 2014, she was a program officer at the National Science Foundation and headed the Cyberlearning and Future Learning Technologies program (originally called Cyberlearning: Transforming Education). Since finishing at NSF, she

the Patom Theory, supports this assessment. Natural language processing has made inroads for applications to support human productivity in service and e-commerce, but this has largely been made possible by narrowing the scope of the application. There are thousands of ways to request something in a human language, and this variability still defies conventional natural language processing. According to Wibe Wagemans, "To have

the conceptual dependency theory for NLU. This model, partially influenced by the work of Sydney Lamb, was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner. In 1970, William A. Woods introduced the augmented transition network (ATN) to represent natural language input. Instead of phrase structure rules, ATNs used an equivalent set of finite state automata that were called recursively. ATNs and their more general format, called "generalized ATNs", continued to be used for
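
As a rough illustration of "finite state automata called recursively", the sketch below implements a drastically simplified recursive transition network in Python: each named network is a straight-line sequence of arcs, and a "push" arc recursively invokes a subnetwork. Real ATNs add alternative arcs, registers, and tests; the grammar and lexicon here are invented for illustration.

```python
# Much simplified recursive transition network: a "cat" arc consumes one word of
# the given category, a "push" arc recursively traverses another network.

LEXICON = {"the": "DET", "a": "DET", "dog": "N", "ball": "N", "chased": "V"}

NETWORKS = {
    "S":  [("push", "NP"), ("push", "VP")],
    "NP": [("cat", "DET"), ("cat", "N")],
    "VP": [("cat", "V"), ("push", "NP")],
}

def traverse(network, tokens, pos):
    """Follow every arc of `network` in order; return the new position or None."""
    for kind, label in NETWORKS[network]:
        if kind == "cat":
            if pos < len(tokens) and LEXICON.get(tokens[pos]) == label:
                pos += 1
            else:
                return None
        else:  # "push": recursive call into a subnetwork
            pos = traverse(label, tokens, pos)
            if pos is None:
                return None
    return pos

def accepts(sentence):
    tokens = sentence.lower().split()
    return traverse("S", tokens, 0) == len(tokens)

print(accepts("the dog chased a ball"))   # True
print(accepts("dog the chased ball a"))   # False
```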

the dBase system, whose easy-to-use syntax effectively launched the personal computer database industry. Systems with an easy-to-use or English-like syntax are, however, quite distinct from systems that use a rich lexicon and include an internal representation (often as first-order logic) of the semantics of natural language sentences. Hence the breadth and depth of "understanding" aimed at by

the formal modeling of context, each with specific strengths and weaknesses.

Janet Kolodner

Janet Lynne Kolodner is an American cognitive scientist and learning scientist. She is a Professor of the Practice at the Lynch School of Education at Boston College and co-lead of the MA Program in Learning Engineering. She is also Regents' Professor Emerita in the School of Interactive Computing, College of Computing at

the BORIS system at Yale, which bore similarities to the work of Roger Schank and W. G. Lehnert. The third millennium saw the introduction of systems using machine learning for text classification, such as IBM Watson. However, experts debate how much "understanding" such systems demonstrate: e.g., according to John Searle, Watson did not even understand the questions. John Ball, cognitive scientist and inventor of

the Cyberlearning: Transforming Education program (renamed Cyberlearning and Future Learning Technologies and, in 2020, RETTL). In 1992, Kolodner was elected a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) for "pioneering research on case-based reasoning and learning, including memory organization, information retrieval, problem solving, and knowledge acquisition." In 2017, she

the comprehension. The interpretation capabilities of a language-understanding system depend on the semantic theory it uses. Competing semantic theories of language have specific trade-offs in their suitability as the basis of computer-automated semantic interpretation. These range from naive semantics or stochastic semantic analysis to the use of pragmatics to derive meaning from context. Semantic parsers convert natural-language texts into formal meaning representations. Advanced applications of NLU also attempt to incorporate logical inference within their framework. This
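
To show what a formal meaning representation might look like, the following toy semantic parser (invented for illustration, not a real system) maps short robot-style commands onto a nested structure standing in for a logical form; the grammar, vocabulary, and output schema are assumptions of the example.

```python
# Toy semantic parser: maps a short robot command to a formal meaning
# representation (a nested dict standing in for a logical form).

ACTIONS = {"move": "MOVE", "put": "MOVE", "pick": "GRASP"}
COLORS = {"red", "green", "blue"}

def parse_noun_phrase(tokens):
    """Consume 'the [color] block' and return (object description, remaining tokens)."""
    if not tokens or tokens[0] != "the":
        raise ValueError("expected determiner 'the'")
    tokens = tokens[1:]
    color = None
    if tokens and tokens[0] in COLORS:
        color, tokens = tokens[0], tokens[1:]
    if not tokens or tokens[0] != "block":
        raise ValueError("expected head noun 'block'")
    return {"type": "block", "color": color}, tokens[1:]

def parse_command(sentence):
    tokens = sentence.lower().split()
    action = ACTIONS.get(tokens[0])
    if action is None:
        raise ValueError(f"unknown verb: {tokens[0]}")
    theme, rest = parse_noun_phrase(tokens[1:])
    goal = None
    if rest and rest[0] in {"onto", "on"}:
        goal, rest = parse_noun_phrase(rest[1:])
    return {"action": action, "theme": theme, "goal": goal}

print(parse_command("move the red block onto the green block"))
# {'action': 'MOVE', 'theme': {'type': 'block', 'color': 'red'},
#  'goal': {'type': 'block', 'color': 'green'}}
```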

the current state of the art. Regardless of the approach used, most NLU systems share some common components. The system needs a lexicon of the language and a parser and grammar rules to break sentences into an internal representation. The construction of a rich lexicon with a suitable ontology requires significant effort, e.g., the WordNet lexicon required many person-years of effort. The system also needs theory from semantics to guide
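
As a small example of the kind of lexical and ontological knowledge such a resource supplies, the snippet below queries WordNet through NLTK, assuming NLTK and its WordNet data are installed; a production NLU lexicon would build considerably more structure on top of lookups like this.

```python
# Peek at the lexical/ontological knowledge a hand-built resource like WordNet
# provides. Requires: pip install nltk, then nltk.download("wordnet").
from nltk.corpus import wordnet as wn

for synset in wn.synsets("bank")[:3]:
    # Each synset is one word sense, with a gloss and a position in the ontology.
    hypernyms = [h.name() for h in synset.hypernyms()]
    print(synset.name(), "-", synset.definition())
    print("  is-a:", hypernyms)
```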

the full comprehension of newspaper articles or poetry passages. Many real-world applications fall between the two extremes; for instance, text classification for the automatic analysis of emails and their routing to a suitable department in a corporation does not require an in-depth understanding of the text, but it needs to deal with a much larger vocabulary and more diverse syntax than the management of simple queries to database tables with fixed schemata. Throughout

the publication of his book Language as a Cognitive Process. At Stanford, Winograd would later advise Larry Page, who co-founded Google. In the 1970s and 1980s, the natural language processing group at SRI International continued research and development in the field. A number of commercial efforts based on the research were undertaken, e.g., in 1982 Gary Hendrix formed Symantec Corporation originally as

the results of previous cases are applied to new situations, cutting down the complexity of the reasoning necessary in later situations and allowing a problem solver to anticipate and avoid previously made mistakes. Automated case-based reasoners from her lab include MEDIATOR and PERSUADER, common sense and expert mediation programs; JULIA, a case-based design problem solver; CELIA, a case-based car mechanic; MEDIC,
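
A bare-bones sketch of this retrieve-reuse-retain cycle is given below; the domain, case features, similarity measure, and adaptation step are invented for illustration and are not taken from the systems named here.

```python
# Minimal case-based reasoning loop: retrieve the most similar past case,
# reuse its solution, and retain the new experience.
from dataclasses import dataclass

@dataclass
class Case:
    features: dict   # description of the problem situation
    solution: str    # what worked (or failed) last time

def similarity(a: dict, b: dict) -> float:
    """Fraction of features on which two situations agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def solve(problem: dict, case_base: list[Case]) -> str:
    # Retrieve: find the past case closest to the new situation.
    best = max(case_base, key=lambda c: similarity(problem, c.features))
    # Reuse/adapt: here we simply copy the old solution; a real system would
    # repair it for the differences between the two situations.
    solution = best.solution
    # Retain: store the new experience for future problems.
    case_base.append(Case(problem, solution))
    return solution

case_base = [
    Case({"dish": "pasta", "guests": 4, "vegetarian": True}, "mushroom lasagna"),
    Case({"dish": "soup", "guests": 2, "vegetarian": False}, "chicken noodle soup"),
]
print(solve({"dish": "pasta", "guests": 4, "vegetarian": False}, case_base))
# -> "mushroom lasagna" (closest prior case; an adaptation step would swap the filling)
```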

the years, various attempts at processing natural language or English-like sentences presented to computers have taken place at varying degrees of complexity. Some attempts have not resulted in systems with deep understanding, but they have helped overall system usability. For example, Wayne Ratliff originally developed the Vulcan program with an English-like syntax to mimic the English-speaking computer in Star Trek. Vulcan later became

was elected an Inaugural Fellow of the International Society of the Learning Sciences (ISLS). Kolodner's research addresses issues in learning, memory, and problem solving, both in computers and in people. She pioneered the computer reasoning method called case-based reasoning, a way of solving problems based on analogies to past experiences, and her lab emphasized case-based reasoning for situations of real-world complexity. In case-based reasoning,
