Long Short-Term Memory Networks

Long Short-Term Memory (LSTM) Networks are a type of recurrent neural network (RNN) designed to address the vanishing and exploding gradient problems that plague traditional RNNs. Introduced by Sepp Hochreiter and Jürgen Schmidhuber in 1997, LSTM networks can retain important information over long spans of a sequence while discarding irrelevant details, making them well suited to tasks that involve processing sequential data and recognizing patterns over time, such as speech recognition, language modeling, and time series forecasting.

The distinguishing feature of LSTM networks is the memory cell, which uses self-connected recurrent edges to carry the network's "state" across time steps, together with structures called gates. These gates – the input, forget, and output gates – control the flow of information into and out of the memory cell. By learning how to operate these gates, an LSTM decides what to remember and what to forget as it processes a sequence, allowing it to keep or erase information in its state over the short or long term.
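
The gate computations can be sketched concretely. Below is a minimal, illustrative single-step LSTM cell written in Python with NumPy; the weight shapes, initialization, and toy sequence are assumptions made only to keep the example self-contained, not taken from any particular library.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x_t, h_prev, c_prev, params):
    """One LSTM time step: the gates decide what to forget, write, and expose."""
    W_f, W_i, W_o, W_c, b_f, b_i, b_o, b_c = params
    z = np.concatenate([h_prev, x_t])   # previous hidden state joined with current input

    f_t = sigmoid(W_f @ z + b_f)        # forget gate: what to erase from the cell state
    i_t = sigmoid(W_i @ z + b_i)        # input gate: how much new information to write
    o_t = sigmoid(W_o @ z + b_o)        # output gate: what part of the state to expose
    c_hat = np.tanh(W_c @ z + b_c)      # candidate values for the cell state

    c_t = f_t * c_prev + i_t * c_hat    # update the long-term memory (cell state)
    h_t = o_t * np.tanh(c_t)            # compute the short-term output (hidden state)
    return h_t, c_t

# Hypothetical sizes: 4-dimensional input, 3-dimensional hidden state
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
params = [rng.standard_normal((n_hid, n_hid + n_in)) * 0.1 for _ in range(4)] + \
         [np.zeros(n_hid) for _ in range(4)]

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.standard_normal((5, n_in)):   # run over a toy sequence of 5 steps
    h, c = lstm_cell_step(x_t, h, c, params)
```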

LSTMs have proven effective at handling sequential data, making them a popular choice for tasks such as natural language processing and machine translation. For example, an LSTM can capture the context of a sentence by remembering which words have already appeared and using that information to interpret the words that follow. Although they are more computationally expensive than traditional RNNs, their strong performance on sequential data keeps LSTMs a pivotal tool in deep learning.
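
As an illustration of how this looks in practice, the following sketch runs a toy batch of word indices through an embedding layer followed by PyTorch's nn.LSTM; the vocabulary size, embedding dimension, and hidden dimension are arbitrary placeholder values.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: vocabulary of 1000 words, 32-dim embeddings, 64-dim hidden state
vocab_size, embed_dim, hidden_dim = 1000, 32, 64

embedding = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(input_size=embed_dim, hidden_size=hidden_dim, batch_first=True)

# A toy batch of two "sentences", each a sequence of 6 word indices
tokens = torch.randint(0, vocab_size, (2, 6))
outputs, (h_n, c_n) = lstm(embedding(tokens))

print(outputs.shape)  # (2, 6, 64): one hidden state per word, informed by the words before it
print(h_n.shape)      # (1, 2, 64): final hidden state summarizing each sequence
```

Each element of outputs is a hidden state summarizing the words seen so far, which is what a downstream component such as a classifier or a translation decoder would consume.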
