Saturday, March 23, 2019

Natural Language Processing

There have been high hopes for Natural Language Processing. Natural Language Processing, also known simply as human language technology, is part of the broader field of Artificial Intelligence, the effort towards making machines think. Computers may seem intelligent as they crunch numbers and process information with blazing speed. In truth, computers are nothing but dumb slaves that understand only on or off and are limited to simple instructions. But since the invention of the computer, scientists have been attempting to make computers not only appear intelligent but be intelligent. A truly intelligent computer would not be limited to rigid computer-language commands, but would instead be able to process and understand the English language. This is the idea behind Natural Language Processing.

The phases a message would go through during NLP consist of message, syntax, semantics, pragmatics, and intended meaning. (M. A. Fischer, 1987) Syntax is the grammatical structure. Semantics is the literal meaning. Pragmatics is world knowledge, knowledge of the context, and a model of the sender. When syntax, semantics, and pragmatics are applied together, accurate Natural Language Processing will exist. (A toy sketch of such a pipeline appears below, after the essay text.)

Alan Turing made this prediction about NLP in 1950 (Daniel Crevier, 1994, page 9):

"I believe that in about fifty years' time it will be possible to program computers .... to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning."

But in 1950, computer technology was limited. Because of these limitations, NLP programs of that day focused on exploiting the strengths the computers did have. For example, a program called SYNTHEX tried to determine the meaning of sentences by looking up each word in its encyclopedia. Another early approach was Noam Chomsky's at MIT. He believed that language could be analyzed without any reference to semantics or pragmatics, just by looking at the syntax. Neither of these techniques worked. Scientists realized that their Artificial Intelligence programs did not think the way people do, and since people are much more intelligent than those programs, they decided to make their programs think more the way a person would. So in the late 1950s, scientists shifted from trying to exploit the capabilities of computers to trying to emulate the human brain. (Daniel Crevier, 1994)

Ross Quillian at Carnegie Mellon wanted to program the associative aspects of human memory to create better NLP programs. (Daniel Crevier, 1994) Quillian's idea was to determine the meaning of a word by the words around it (see the second sketch below). For example, look at these sentences: After the strike, the
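
A minimal Python sketch of the phase model described above, offered only as an illustration: the functions parse_syntax, interpret_semantics, and apply_pragmatics, the toy lexicon, and the context set are all invented here and do not come from Fischer's paper or any real NLP library.

    # A toy message -> syntax -> semantics -> pragmatics pipeline.
    # Every name and lexicon entry below is invented for illustration.

    def parse_syntax(message):
        # Recover a crude grammatical structure: here, just tokens.
        return message.lower().rstrip(".?!").split()

    def interpret_semantics(tokens):
        # Attach literal meanings from a tiny hand-made lexicon.
        lexicon = {
            "bank": "financial institution OR river edge",
            "open": "not closed OR ready for business",
        }
        return {t: lexicon.get(t, t) for t in tokens}

    def apply_pragmatics(literal, context):
        # Use world knowledge about the context to pick one sense.
        if "money" in context and "bank" in literal:
            literal["bank"] = "financial institution"
        return literal

    def understand(message, context):
        tokens = parse_syntax(message)              # syntax
        literal = interpret_semantics(tokens)       # semantics
        return apply_pragmatics(literal, context)   # pragmatics

    print(understand("The bank is open.", context={"money"}))

Running it prints the literal senses with "bank" resolved to "financial institution": pragmatics picks among meanings that syntax and semantics alone leave ambiguous.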
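
And a toy rendering of Quillian's context idea. His actual system used semantic networks with spreading activation; this sketch substitutes a much simpler overlap count between the surrounding words and hand-written association sets, all of which are invented here for illustration.

    # Quillian's idea in miniature: let surrounding words choose a
    # word's sense. The association sets and sentence are made up.

    SENSES = {
        "strike": {
            "work stoppage": {"union", "workers", "picket", "factory", "wages"},
            "blow": {"hit", "punch", "bat", "ball"},
        },
    }

    def disambiguate(word, context_words):
        # Score each candidate sense by its overlap with the context.
        scores = {sense: len(assoc & set(context_words))
                  for sense, assoc in SENSES[word].items()}
        return max(scores, key=scores.get)

    context = "after the strike the union workers returned to the factory".split()
    print(disambiguate("strike", context))  # -> 'work stoppage'

Here "union", "workers", and "factory" pull "strike" toward its work-stoppage sense, echoing the kind of example the essay begins to give.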
