Supporting Material: T0264P21_1

Natural language processing
From Wikipedia, the free encyclopedia

Natural language processing (NLP) is a subfield of artificial intelligence and linguistics. It studies the problems of automated generation and understanding of natural human languages. Natural language generation systems convert information from computer databases into normal-sounding human language, and natural language understanding systems convert samples of human language into more formal representations that are easier for computer programs to manipulate.

Natural language processing

Early systems such as SHRDLU, working in restricted "blocks worlds" with restricted vocabularies, worked extremely well, leading researchers to an excessive optimism that was soon lost when the systems were extended to more realistic situations with real-world ambiguity and complexity.

Natural language understanding is sometimes referred to as an AI-complete problem, because natural language recognition seems to require extensive knowledge about the outside world and the ability to manipulate it. The definition of "understanding" is one of the major problems in natural language processing.

Some examples of the problems faced by natural language understanding systems:

The sentences We gave the monkeys the bananas because they were hungry and We gave the monkeys the bananas because they were over-ripe have the same surface grammatical structure. However, in one of them the word they refers to the monkeys, while in the other it refers to the bananas: the sentence cannot be understood properly without knowledge of the properties and behaviour of monkeys and bananas.

A string of words may be interpreted in myriad ways. For example, the string Time flies like an arrow may be read as:
- time moves quickly, just as an arrow does;
- measure the speed of flying insects as you would measure that of an arrow, i.e. (You should) time flies like you would an arrow;
- measure the speed of flying insects as an arrow would, i.e. Time flies in the same way that an arrow would (time them);
- measure the speed of flying insects that are like arrows, i.e. Time those flies that are like arrows;
- a type of flying insect, "time-flies," enjoys arrows (compare Fruit flies like a banana).

The word "time" alone can be interpreted as three different parts of speech: a noun in the first reading, a verb in the second, third and fourth, and an adjective in the fifth. English is particularly challenging in this regard because it has little inflectional morphology to distinguish between parts of speech.

English and several other languages also do not specify which word an adjective applies to. For example, in the string "pretty little girls' school":
- Does the school look little?
- Do the girls look little?
- Do the girls look pretty?
- Does the school look pretty?
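The combinatorics behind this kind of ambiguity can be made concrete with a short sketch. The Python snippet below is a toy illustration only; the lexicon of possible parts of speech for each word is an assumption made for this example, not the output of any real tagger. It enumerates every part-of-speech assignment a system would have to consider for Time flies like an arrow before semantic or contextual knowledge rules most of them out.

from itertools import product

# Toy lexicon: each word mapped to the parts of speech it can plausibly take.
# These options are illustrative assumptions, not drawn from a real tagger.
lexicon = {
    "time":  ["noun", "verb", "adjective"],
    "flies": ["noun", "verb"],
    "like":  ["preposition", "verb"],
    "an":    ["determiner"],
    "arrow": ["noun"],
}

sentence = ["time", "flies", "like", "an", "arrow"]

# Enumerate every combination of part-of-speech choices for the sentence.
assignments = list(product(*(lexicon[word] for word in sentence)))

print(len(assignments), "candidate tag sequences, for example:")
for tags in assignments[:3]:
    print("  " + " ".join(w + "/" + t for w, t in zip(sentence, tags)))

Even this five-word sentence yields a dozen candidate tag sequences; realistic sentences and grammars multiply such choices into the thousands, which is why disambiguation is a central concern in the tasks and problems listed below.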
The major tasks in NLP

- Text to speech
- Speech recognition
- Natural language generation
- Machine translation
- Question answering
- Information retrieval
- Information extraction
- Text-proofing
- Translation technology
- Automatic summarization

Some problems which make NLP difficult

Speech segmentation
In most spoken languages, the sounds representing successive letters blend into each other, so converting the analog signal to discrete characters can be a very difficult process. Also, in natural speech there are hardly any pauses between successive words; locating word boundaries usually must take into account grammatical and semantic constraints, as well as the context.

Text segmentation
Some written languages, such as Chinese and Thai, do not mark word boundaries either, so any significant text parsing usually requires the identification of word boundaries, which is often a non-trivial task (a segmentation sketch appears after the Statistical NLP section below).

Word sense disambiguation
Many words have more than one meaning; we have to select the meaning which makes the most sense in context.

Syntactic ambiguity
The grammar for natural languages is ambiguous, i.e. there are often multiple possible parse trees for a given sentence. Choosing the most appropriate one usually requires semantic and contextual information.

Imperfect or irregular input
Foreign or regional accents and vocal impediments in speech; typing or grammatical errors and OCR errors in texts.

Speech acts and plans
Sentences often don't mean what they literally say; for instance, a good answer to "Can you pass the salt?" is to pass the salt. In most contexts "Yes" is not a good answer, although "No" is better and "I'm afraid that I can't see it" is better yet. Similarly, if a class was not offered last year, "The class was not offered last year" is a better answer to the question "How many students failed the class last year?" than "None" is.

Statistical NLP

Statistical natural language processing uses stochastic, probabilistic and statistical methods to resolve some of the difficulties discussed above, especially those which arise because longer sentences are highly ambiguous when processed with realistic grammars, yielding thousands or millions of possible analyses. Methods for disambiguation often involve the use of corpora and Markov models. The technology for statistical NLP comes mainly from machine learning and data mining, both of which are fields of artificial intelligence that involve learning from data.
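As a concrete, deliberately tiny illustration of the Markov-model approach to disambiguation, the sketch below runs the Viterbi algorithm over a hand-built hidden Markov model of part-of-speech tags. Every probability in it is an invented placeholder standing in for counts that would normally be estimated from a corpus, so it shows the mechanics rather than a real system.

tags = ["noun", "verb", "prep", "det"]

# P(first tag) and P(next tag | previous tag); all values are invented.
start = {"noun": 0.6, "verb": 0.2, "prep": 0.1, "det": 0.1}
trans = {
    "noun": {"noun": 0.2, "verb": 0.5, "prep": 0.2, "det": 0.1},
    "verb": {"noun": 0.3, "verb": 0.1, "prep": 0.3, "det": 0.3},
    "prep": {"noun": 0.3, "verb": 0.1, "prep": 0.1, "det": 0.5},
    "det":  {"noun": 0.8, "verb": 0.1, "prep": 0.05, "det": 0.05},
}

# P(word | tag); unseen word-tag pairs get a small smoothing value below.
emit = {
    "noun": {"time": 0.3, "flies": 0.3, "arrow": 0.4},
    "verb": {"time": 0.2, "flies": 0.5, "like": 0.3},
    "prep": {"like": 1.0},
    "det":  {"an": 1.0},
}

def viterbi(words, smoothing=1e-6):
    """Return the most probable tag sequence for words under the toy model."""
    # Each column maps a tag to (probability of the best path ending in it, that path).
    column = {t: (start[t] * emit[t].get(words[0], smoothing), [t]) for t in tags}
    for word in words[1:]:
        previous, column = column, {}
        for t in tags:
            column[t] = max(
                (previous[p][0] * trans[p][t] * emit[t].get(word, smoothing),
                 previous[p][1] + [t])
                for p in tags
            )
    return max(column.values())[1]

print(viterbi(["time", "flies", "like", "an", "arrow"]))

With these made-up numbers the decoder prefers the noun/verb/preposition/determiner/noun reading of Time flies like an arrow, i.e. "time moves quickly just like an arrow does"; different corpus-derived probabilities could favour a different reading.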
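The text segmentation problem listed above can also be sketched briefly. The function below performs greedy longest-match segmentation against a small dictionary; the word list and the unspaced input string are assumptions made purely for illustration, and real segmenters for Chinese or Thai combine much larger lexicons with statistical models to resolve the splits that greedy matching gets wrong.

def segment(text, dictionary):
    """Greedy longest-match segmentation of text written without word boundaries."""
    words, i = [], 0
    while i < len(text):
        # Take the longest dictionary entry that matches at position i,
        # falling back to a single character so the loop always advances.
        for length in range(len(text) - i, 0, -1):
            candidate = text[i:i + length]
            if candidate in dictionary or length == 1:
                words.append(candidate)
                i += length
                break
    return words

# Toy dictionary and unsegmented input, both assumed for the example.
dictionary = {"time", "flies", "like", "an", "arrow"}
print(segment("timeflieslikeanarrow", dictionary))  # ['time', 'flies', 'like', 'an', 'arrow']

Greedy matching handles this toy input, but real Chinese and Thai text contains genuinely ambiguous split points, which is one reason the statistical methods described above dominate in practice.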
See also

- the Inform 7 programming language
- the fictional universal translator
- computational linguistics
- controlled natural language
- information retrieval
- latent semantic indexing
- lojban / loglan
- transderivational search
- biomedical text mining

External links

Resources
- Natural Language Processing in Spanish
- Resources for Text, Speech and Language Processing
- Natural Language Processing Blog about opinion, language, and blogs

Research and development groups
- Natural Language Group at the Information Sciences Institute
- Survey of the State of the Art in Human Language Technology
- University of Edinburgh Natural Language Processing Group
- Natural Language and Information Processing Group at the University of Cambridge
- Center for Language and Speech Processing at The Johns Hopkins University
- Stanford Natural Language Processing Group
- DNLP: Dalhousie Natural Language Processing Group
- 2004 International Workshop on Natural Language Understanding and Cognitive Science
- CLAC: Computational Linguistics At Concordia
- TCC: Cognitive and Communication Technologies at ITC-Irst
- Center for Natural Language Processing at Syracuse University
- Center for Spoken Language Understanding at Oregon Graduate Institute, OHSU
- Cornell Natural Language Processing Group

Implementations
- OpenNLP
- DELPH-IN: integrated technology for deep language processing
- LinguaStream: a generic platform for Natural Language Processing experimentation
- GATE: a Java library for Text Engineering
- Natural Language Toolkit for Python, with a comprehensive tutorial
- MARF: Modular Audio Recognition Framework for voice and statistical NLP processing
- FreeLing: an open-source suite of language analyzers
- LingPipe: Java Natural Language Processing Toolkit
- The wraetlic toolkit

Retrieved from "http://en.wikipedia.org/wiki/Natural_language_processing"; last modified 07:22, 3 July 2006. All text is available under the terms of the GNU Free Documentation License.