Natural Language Processing

August 25, 2014 by Robyn DeAngelis
Filed under: Language, Translation Services 

Natural Language Processing (NLP) can best be described as the field of computer science, artificial intelligence, and linguistics concerned with the interactions between computers and human languages. NLP ties into translation through Machine Translation (MT), the task of automatically converting one natural language into another while preserving the meaning of the input text and producing fluent text in the output language. Other NLP tasks include information extraction, sentiment analysis, and question answering.
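To make one of those tasks concrete, here is a minimal, pure-Python sketch of sentiment analysis that scores a sentence against small hand-picked word lists; the word lists and the scoring rule are illustrative assumptions, not a production lexicon or trained model.

    # Toy sentiment analysis: score text by counting words from small,
    # hand-picked positive/negative lists (illustrative only; real systems
    # use large lexicons or trained models).
    POSITIVE = {"good", "great", "excellent", "fluent", "accurate"}
    NEGATIVE = {"bad", "poor", "awkward", "wrong", "garbled"}

    def sentiment(text: str) -> str:
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

    print(sentiment("The translation was fluent and accurate"))  # positive
    print(sentiment("The output was awkward and garbled"))       # negative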

NLP traces its roots to the 1950s and Alan Turing, whose “Turing Test” (a test of a machine’s ability to exhibit intelligent behavior equal to, or indistinguishable from, that of a human) remains one of the benchmark determinants of machine intelligence. Later, in 1954, the Georgetown experiment (the fully automated translation of more than 60 Russian sentences into English) was conducted, with the hope that machine translation would be a solved problem within five years.

In 1964, the US government established the Automatic Language Processing Advisory Committee, a group of seven scientists, to evaluate progress in machine translation. The committee’s findings, issued in 1966 as the ALPAC report, gained notoriety for their skepticism about the machine translation research done to date; the report emphasized the need for further research and ultimately caused the U.S. government to reduce its funding of the field dramatically.

Later in the 1960s, many other advances in NLP were made, most notably Joseph Weizenbaum’s ELIZA program, an early example of primitive NLP. ELIZA operated by matching users’ input against scripted patterns; its most famous script was DOCTOR, a simulation of a Rogerian psychotherapist (modeled on Carl Rogers and his client-centered brand of talk therapy). Using almost no information about human thought or emotion, DOCTOR sometimes provided a startlingly human-like interaction.
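To give a flavor of how an ELIZA-style script works, here is a minimal Python sketch of pattern matching with canned, reflective responses; the patterns and replies below are invented for illustration and are not Weizenbaum’s original DOCTOR script.

    import re
    import random

    # A few DOCTOR-style rules: a pattern to match in the user's input and
    # canned responses that reflect the captured text back, Rogerian-style.
    RULES = [
        (re.compile(r"\bi feel (.+)", re.I),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"\bi am (.+)", re.I),
         ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
        (re.compile(r"\bmy (\w+)", re.I),
         ["Tell me more about your {0}."]),
    ]
    DEFAULTS = ["Please go on.", "Can you elaborate on that?"]

    def respond(user_input: str) -> str:
        # Return a response built from the first matching rule,
        # or a generic prompt if nothing matches.
        for pattern, responses in RULES:
            match = pattern.search(user_input)
            if match:
                return random.choice(responses).format(*match.groups())
        return random.choice(DEFAULTS)

    print(respond("I feel anxious about the deadline"))
    # e.g. "Why do you feel anxious about the deadline?"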

Throughout the 1970s and 80s, new trends in NLP started to emerge. In the 1970s, many programmers began to write “conceptual ontologies,” which structured real-world information into computer-understandable data, and chatterbots (computer programs designed to simulate intelligent conversation with one or more human users via auditory or textual methods, primarily for engaging in “small talk”) were also employed. Up to the 1980s, most NLP systems were based on complex sets of handwritten rules. During the late 1980s, however, we started to see the development of the cache language models upon which many speech recognition systems now rely. While Google has made great strides with NLP and MT, the challenge remains maintaining the integrity of sentence structure.
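To contrast handwritten rules with the statistical language models that followed, here is a minimal Python sketch of a bigram model that estimates how likely one word is to follow another from counts over a tiny sample corpus; the corpus and the unsmoothed probabilities are illustrative assumptions (a cache model would additionally boost words seen recently in the text).

    from collections import defaultdict

    # Minimal bigram language model: estimate P(next_word | word) from counts.
    # The tiny "corpus" below is illustrative; real models are trained on
    # millions of sentences.
    corpus = [
        "the translation preserved the meaning of the text",
        "the system produced fluent text in the target language",
    ]

    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1

    def prob(prev: str, nxt: str) -> float:
        # Relative frequency of `nxt` among all words that followed `prev`.
        total = sum(counts[prev].values())
        return counts[prev][nxt] / total if total else 0.0

    print(prob("the", "text"))     # 0.2: "text" followed "the" once out of five times
    print(prob("fluent", "text"))  # 1.0 in this tiny sample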

Moving forward, try to imagine talking to your computer and having it talk back to you in a fluid style. Apple’s Siri is a great example of where NLP is headed, but it is only the tip of the iceberg. As NLP technology advances, computers will have a much easier time understanding us.
