Natural language processing

Discover how Natural Language Processing (NLP) became the driving force behind today’s generative AI revolution. From early translation tools to the rise of large language models, this entry unpacks NLP’s journey and impact.

By Tech Brew Staff



Definition:

With the ascendance of foundation models, much of the “AI” in the news these days involves natural language processing (NLP). Natural language is the interface for all generative AI applications, whether the output is text, images, video, code, or search results.

NLP is a field of machine learning concerned with recognizing, parsing, and generating human language. It spans tasks like speech recognition, machine translation, text-to-speech, sentiment analysis, and language generation. The field evolved from computational linguistics, which combines computer science and linguistics, and its subfields include natural language understanding (NLU) and natural language generation (NLG).
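
To make one of those tasks concrete, here is a minimal sentiment-analysis sketch using the open-source Hugging Face transformers library. The library choice is ours for illustration; this entry doesn't prescribe any particular tool.

```python
# A minimal sketch of one NLP task: sentiment analysis.
# Assumes the Hugging Face transformers library is installed
# (pip install transformers); it is one option among many.
from transformers import pipeline

# Downloads a small pretrained English sentiment model on first run.
classifier = pipeline("sentiment-analysis")

print(classifier("The new translation feature works beautifully."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

Under the hood, that one-liner wraps the same pieces this entry describes: a neural network that reads text and outputs a label with a confidence score.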

In this moment

NLP is the backbone of the current generative AI era, which is built on large language models and text-to-image diffusion models. Before LLMs, some of the most widespread commercial products using forms of NLP included search engines like Google, voice assistants like Siri and Alexa, translation services, and spelling and grammar checkers. Throughout the 2010s, big tech companies gradually revamped all of these tools with deep learning, a form of machine learning built on multilayered neural networks.

In 2011, IBM Watson drew attention to the field when it bested top human contestants on Jeopardy!, in an era before neural networks were in widespread use.

In 2013 and 2014, researchers developed techniques like Word2Vec and GloVe for encoding words into vectors, making them easier for neural networks to process. This enabled the rise of sequence-to-sequence models in 2014, which made neural machine translation practical. That paved the way for the attention mechanism in 2015, which lets a neural network weigh different parts of a passage by relevance, and then for transformer architectures in 2017, which set the stage for the LLM era.
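
For a concrete feel for the word-vector idea, here is a toy sketch using the open-source gensim library's Word2Vec implementation. The three-sentence corpus is invented purely for illustration; real embeddings are trained on corpora of billions of words.

```python
# A toy Word2Vec sketch with gensim (pip install gensim).
# The tiny corpus below is a made-up example; production models
# are trained on far larger text collections.
from gensim.models import Word2Vec

sentences = [
    ["natural", "language", "processing", "parses", "text"],
    ["neural", "networks", "process", "word", "vectors"],
    ["machine", "translation", "maps", "language", "to", "language"],
]

# vector_size sets the embedding dimension; min_count=1 keeps
# every word despite the tiny corpus.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, seed=1)

vec = model.wv["language"]                      # a 50-dimensional vector
print(model.wv.similarity("language", "text"))  # cosine similarity score
```

The payoff is that words become points in space, so a network can compare them numerically: similar words end up with similar vectors, which is exactly what made the later sequence-to-sequence and attention breakthroughs workable.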