
Natural Language Processing (NLP) Defined

Natural language processing (NLP) is a branch of artificial intelligence (AI) that enables
computers to comprehend, generate, and manipulate human language. NLP lets users
interrogate data with natural language text or voice, a capability also called "language in."
Most consumers have probably interacted with NLP without realizing it.
For instance, NLP is the core technology behind virtual assistants, such as the Oracle Digital
Assistant (ODA), Siri, Cortana, or Alexa. When we ask questions of these virtual assistants, NLP
is what enables them to not only understand the user’s request, but to also respond in natural
language. NLP applies both to written text and speech, and can be applied to all human
languages. Other examples of tools powered by NLP include web search, email spam filtering,
automatic translation of text or speech, document summarization, sentiment analysis, and
grammar/spell checking. For example, some email programs can automatically suggest an
appropriate reply to a message based on its content—these programs use NLP to read, analyze,
and respond to your message.
There are several other terms that are roughly synonymous with NLP. Natural language
understanding (NLU) and natural language generation (NLG) refer to using computers to
understand and produce human language, respectively. NLG can provide a verbal description
of what has happened, sometimes called "language out," by summarizing meaningful
information into text using a concept known as the "grammar of graphics."

In practice, NLU is often used interchangeably with NLP: the understanding by computers of the
structure and meaning of human language, which allows developers and users to interact with
computers using natural sentences. Computational linguistics (CL) is the scientific field that
studies computational aspects of human language, while NLP is the engineering discipline
concerned with building computational artifacts that understand, generate, or manipulate human
language.

Research on NLP began shortly after the invention of digital computers in the 1950s, and NLP
draws on both linguistics and AI. However, the major breakthroughs of the past few years have
been powered by machine learning, which is a branch of AI that develops systems that learn and
generalize from data. Deep learning is a kind of machine learning that can learn very complex
patterns from large datasets, which means that it is ideally suited to learning the complexities of
natural language from datasets sourced from the web.

Applications of Natural Language Processing


Automate routine tasks: Chatbots powered by NLP can process a large number of routine tasks
that are handled by human agents today, freeing up employees to work on more challenging and
interesting tasks. For example, chatbots and digital assistants can recognize a wide variety of
user requests, match them to the appropriate entry in a corporate database, and formulate an
appropriate response to the user.
Improve search: NLP can improve on keyword matching search for document and FAQ
retrieval by disambiguating word senses based on context (for example, “carrier” means
something different in biomedical and industrial contexts), matching synonyms (for example,
retrieving documents mentioning “car” given a search for “automobile”), and taking
morphological variation into account (which is important for non-English queries). Effective
NLP-powered academic search systems can dramatically improve access to relevant cutting-edge
research for doctors, lawyers, and other specialists.
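The query-expansion ideas above can be sketched in a few lines of Python. The synonym table and suffix-stripping rule below are hypothetical stand-ins for the much richer lexical resources and morphological analyzers a real search engine would use:

```python
# Toy illustration of NLP-enhanced retrieval: matching that expands
# synonyms and strips simple morphological endings, so "automobile"
# can retrieve a document mentioning "cars".

SYNONYMS = {"automobile": {"car"}, "car": {"automobile"}}

def stem(word):
    """Crude suffix stripping (a real system would use a morphological analyzer)."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def expand(query_terms):
    """Add synonyms and stems of each query term."""
    expanded = set()
    for term in query_terms:
        expanded.add(term)
        expanded.add(stem(term))
        expanded |= SYNONYMS.get(term, set())
    return expanded

def matches(query, document):
    terms = expand(query.lower().split())
    doc_words = set(document.lower().split())
    doc_terms = doc_words | {stem(w) for w in doc_words}
    return bool(terms & doc_terms)

print(matches("automobile", "used cars for sale"))    # True (synonym + stem)
print(matches("automobile", "fresh fruit delivered"))  # False
```

Plain keyword matching would miss the first document entirely; the expansion step is what bridges the vocabulary gap between query and document.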
Search engine optimization: NLP is a great tool for getting your business ranked higher in
online search by analyzing searches to optimize your content. Search engines use NLP to rank
their results—and knowing how to effectively use these techniques makes it easier to be ranked
above your competitors. This will lead to greater visibility for your business.
Analyzing and organizing large document collections: NLP techniques such as document
clustering and topic modeling simplify the task of understanding the diversity of content in large
document collections, such as corporate reports, news articles, or scientific documents. These
techniques are often used for legal discovery.
Social media analytics: NLP can analyze customer reviews and social media comments to make
better sense of huge volumes of information. Sentiment analysis identifies positive and negative
comments in a stream of social-media comments, providing a direct measure of customer
sentiment in real time. This can lead to huge payoffs down the line, such as increased customer
satisfaction and revenue.
Market insights: With NLP working to analyze the language of your business’ customers,
you’ll have a better handle on what they want, and also a better idea of how to communicate with
them. Aspect-oriented sentiment analysis detects the sentiment associated with specific aspects
or products in social media (for example, “the keyboard is great, but the screen is too dim”),
providing directly actionable information for product design and marketing.
Moderating content: If your business attracts large amounts of user or customer comments,
NLP enables you to moderate what’s being said in order to maintain quality and civility by
analyzing not only the words, but also the tone and intent of comments.

Industries Using Natural Language Processing


NLP simplifies and automates a wide range of business processes, especially ones that involve
large amounts of unstructured text like emails, surveys, social media conversations, and more.
With NLP, businesses are better able to analyze their data to help make the right decisions. Here
are just a few examples of practical applications of NLP:

 Healthcare: As healthcare systems all over the world move to electronic medical
records, they are encountering large amounts of unstructured data. NLP can be used to
analyze and gain new insights into health records.
 Legal: To prepare for a case, lawyers must often spend hours examining large
collections of documents and searching for material relevant to a specific case. NLP
technology can automate the process of legal discovery, cutting down on both time
and human error by sifting through large volumes of documents.
 Finance: The financial world moves extremely fast, and any competitive advantage is
important. In the financial field, traders use NLP technology to automatically mine
information from corporate documents and news releases to extract information
relevant to their portfolios and trading decisions.
 Customer service: Many large companies are using virtual assistants or chatbots to
help answer basic customer inquiries and information requests (such as FAQs),
passing on complex questions to humans when necessary.
 Insurance: Large insurance companies are using NLP to sift through documents and
reports related to claims, in an effort to streamline the way business gets done.

NLP Technology Overview


Machine learning models for NLP: We mentioned earlier that modern NLP relies heavily on
an approach to AI called machine learning. Machine learning models make predictions by generalizing
over examples in a dataset. This dataset is called the training data, and machine learning
algorithms train on this training data to produce a machine learning model that accomplishes a
target task.
For example, sentiment analysis training data consists of sentences together with their sentiment
(for example, positive, negative, or neutral sentiment). A machine-learning algorithm reads this
dataset and produces a model which takes sentences as input and returns their sentiments. This
kind of model, which takes sentences or documents as inputs and returns a label for that input, is
called a document classification model. Document classifiers can also be used to classify
documents by the topics they mention (for example, as sports, finance, politics, etc.).
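A minimal sketch of such a document classifier, using a naive Bayes approach over a tiny hand-labeled training set (illustrative only; real training data would contain thousands of labeled sentences):

```python
from collections import Counter, defaultdict
import math

# Tiny hand-labeled sentiment training set (illustrative).
train = [
    ("great product love it", "positive"),
    ("works great very happy", "positive"),
    ("terrible waste of money", "negative"),
    ("broke after one day terrible", "negative"),
]

# "Training": count word frequencies per label.
word_counts = defaultdict(Counter)
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Return the label with the highest log-probability for the input sentence."""
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.split():
            # Laplace smoothing so unseen words don't zero out the probability.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("love this great product"))  # positive
```

The same structure, with topic labels instead of sentiment labels, gives a topic classifier; modern systems replace the word counts with learned neural representations but keep the same sentence-in, label-out shape.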

Another kind of model is used to recognize and classify entities in documents. For each word in
a document, the model predicts whether that word is part of an entity mention, and if so, what
kind of entity is involved. For example, in “XYZ Corp shares traded for $28 yesterday”, “XYZ
Corp” is a company entity, “$28” is a currency amount, and “yesterday” is a date. The training
data for entity recognition is a collection of texts, where each word is labeled with the kinds of
entities the word refers to. This kind of model, which produces a label for each word in the input,
is called a sequence labeling model.
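The input/output shape of a sequence labeling model can be illustrated with a toy rule-based labeler. The patterns and the small company gazetteer below are hypothetical stand-ins for what a trained model would predict statistically:

```python
import re

# Toy sequence labeler: assigns exactly one label per input word,
# which is the defining property of a sequence labeling model.
KNOWN_COMPANIES = {"XYZ Corp"}

def label_words(sentence):
    words = sentence.split()
    labels = ["O"] * len(words)  # "O" = outside any entity mention
    for i, word in enumerate(words):
        if re.fullmatch(r"\$\d+(\.\d+)?", word):
            labels[i] = "CURRENCY"
        elif word.lower() in {"yesterday", "today", "tomorrow"}:
            labels[i] = "DATE"
    # Match multi-word company names from a small gazetteer.
    for company in KNOWN_COMPANIES:
        parts = company.split()
        for i in range(len(words) - len(parts) + 1):
            if words[i : i + len(parts)] == parts:
                for j in range(len(parts)):
                    labels[i + j] = "COMPANY"
    return list(zip(words, labels))

print(label_words("XYZ Corp shares traded for $28 yesterday"))
```

Running this on the example sentence labels "XYZ" and "Corp" as COMPANY, "$28" as CURRENCY, "yesterday" as DATE, and the remaining words as O.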

Sequence to sequence models are a very recent addition to the family of models used in NLP. A
sequence to sequence (or seq2seq) model takes an entire sentence or document as input (as in a
document classifier) but it produces a sentence or some other sequence (for example, a computer
program) as output. (A document classifier only produces a single symbol as output). Example
applications of seq2seq models include machine translation, which for example, takes an English
sentence as input and returns its French sentence as output; document summarization (where the
output is a summary of the input); and semantic parsing (where the input is a query or request in
English, and the output is a computer program implementing that request).
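The seq2seq interface, whole sequence in, whole sequence out, can be shown with a deliberately trivial stand-in. Real seq2seq models are trained neural networks and do not translate word for word; the toy glossary here only illustrates the shape of the task:

```python
# Illustration of the seq2seq *interface* only: an entire input sequence
# maps to an entire output sequence (here via a toy word-for-word
# English-to-French glossary; unknown words pass through unchanged).
GLOSSARY = {"the": "le", "cat": "chat", "sleeps": "dort"}

def toy_translate(sentence):
    """Map an input token sequence to an output token sequence."""
    return " ".join(GLOSSARY.get(word, word) for word in sentence.split())

print(toy_translate("the cat sleeps"))  # le chat dort
```

Contrast this with the document classifier above, which maps the same kind of input to a single label rather than to another sequence.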
Deep learning, pretrained models, and transfer learning: Deep learning is the most widely
used kind of machine learning in NLP. In the 1980s, researchers developed neural networks, in
which a large number of primitive machine learning models are combined into a single network:
by analogy with brains, the simple machine learning models are sometimes called “neurons.”
These neurons are arranged in layers, and a deep neural network is one with many layers. Deep
learning is machine learning using deep neural network models.
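A minimal sketch of the layered structure described above, in plain Python with random (untrained) weights; training would adjust these weights to fit data:

```python
import math
import random

random.seed(0)  # reproducible random weights

def make_layer(n_in, n_out):
    """One layer = a weight matrix plus a bias for each output neuron."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [random.uniform(-1, 1) for _ in range(n_out)]
    return weights, biases

def forward(layer, inputs):
    """Each 'neuron': weighted sum of inputs, then a sigmoid nonlinearity."""
    weights, biases = layer
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

# A "deep" network is simply many layers applied in sequence.
layers = [make_layer(3, 4), make_layer(4, 4), make_layer(4, 1)]
activation = [0.5, -0.2, 0.9]
for layer in layers:
    activation = forward(layer, activation)
print(activation)  # a single value between 0 and 1
```

Each neuron here is the "primitive machine learning model" of the analogy: a weighted sum followed by a simple nonlinearity. Depth comes from stacking layers, not from making any single neuron more complex.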
Because of their complexity, generally it takes a lot of data to train a deep neural network, and
processing it takes a lot of compute power and time. Modern deep neural network NLP models
are trained from a diverse array of sources, such as all of Wikipedia and data scraped from the
web. The training data might be on the order of 10 GB or more in size, and it might take a week
or more on a high-performance cluster to train the deep neural network. (Researchers find that
training even deeper models on even larger datasets yields even higher performance, so there is
currently a race to train bigger and bigger models on larger and larger datasets.)
The voracious data and compute requirements of deep neural networks would seem to severely
limit their usefulness. However, transfer learning enables a trained deep neural network to be
further trained to achieve a new task with much less training data and compute effort. The
simplest kind of transfer learning is called fine tuning. It consists simply of first training the
model on a large generic dataset (for example, Wikipedia) and then further training (“fine-
tuning”) the model on a much smaller task-specific dataset that is labeled with the actual target
task. Perhaps surprisingly, the fine-tuning datasets can be extremely small, maybe containing
only hundreds or even tens of training examples, and fine-tuning training only requires minutes
on a single CPU. Transfer learning makes it easy to deploy deep learning models throughout the
enterprise.
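The fine-tuning idea can be sketched with a one-parameter model: first fit a weight on a large "generic" dataset, then continue training from that weight on a small task-specific dataset rather than starting from scratch. The datasets and learning rate below are invented for illustration:

```python
import random

random.seed(1)

def train(w, data, steps, lr=0.001):
    """Stochastic gradient descent on squared error for the model y = w * x."""
    for _ in range(steps):
        x, y = random.choice(data)
        w -= lr * 2 * (w * x - y) * x
    return w

# Large "pretraining" set (true slope 2.0) and a tiny task-specific set
# whose slope is closer to 2.5.
generic_data = [(x, 2.0 * x) for x in range(-10, 11)]
task_data = [(1.0, 2.5), (2.0, 5.1), (3.0, 7.4)]

pretrained_w = train(0.0, generic_data, steps=500)   # converges near 2.0
finetuned_w = train(pretrained_w, task_data, steps=50)  # nudged toward ~2.5

print(round(pretrained_w, 2), round(finetuned_w, 2))
```

The second training run starts from the pretrained weight instead of zero, so a handful of task examples and a few steps are enough to adapt the model, which is exactly the economy that makes fine-tuning large pretrained NLP models practical.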

There is now an entire ecosystem of pr
