Underfitting


Underfitting refers to a scenario in which a machine learning model fails to capture the underlying patterns and relationships within the training data. It occurs when the model is too simple or lacks the capacity to represent the complexity of the data. As a result, the model performs poorly not only on the training data but also on new, unseen data.
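As a concrete illustration, the sketch below is a minimal, hypothetical example (it assumes scikit-learn and synthetic data with a quadratic relationship, neither of which comes from this entry): a plain linear model is fit to curved data and scores poorly on both the training and test sets.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Hypothetical synthetic data: y depends quadratically on x,
# a pattern a straight line cannot represent.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.5, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The linear model underfits: R^2 is low even on the training data,
# not just on unseen data.
model = LinearRegression().fit(X_train, y_train)
print(f"train R^2: {model.score(X_train, y_train):.2f}")
print(f"test  R^2: {model.score(X_test, y_test):.2f}")
```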


Underfitting can arise for various reasons, such as an overly simple model architecture, insufficient training, or an inadequate feature representation. A model that underfits has high bias and therefore generalizes poorly. On a learning curve, underfitting shows up as the training and validation errors converging at a relatively high level.
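A rough sketch of such a learning curve, again assuming scikit-learn and the same kind of synthetic quadratic data, might look like the following; both errors plateau at a similarly high value rather than diverging.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import learning_curve

# Same hypothetical setup: quadratic data, linear model.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.5, size=500)

# learning_curve scores with negated MSE (higher is better),
# so flip the sign to read the values as errors.
sizes, train_scores, val_scores = learning_curve(
    LinearRegression(), X, y, cv=5,
    train_sizes=np.linspace(0.1, 1.0, 5),
    scoring="neg_mean_squared_error",
)

for n, tr, va in zip(sizes, -train_scores.mean(axis=1), -val_scores.mean(axis=1)):
    print(f"n={n:3d}  train MSE={tr:.2f}  validation MSE={va:.2f}")
# Training and validation errors converge at a high level: high bias.
```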


Addressing underfitting often requires increasing the model’s complexity, using more relevant features, or extending the training duration. Techniques like increasing the number of model layers, adjusting hyperparameters, or collecting more diverse and comprehensive training data can help mitigate underfitting. Balancing model complexity is essential, as overly complex models may lead to overfitting, where the model becomes too specialized to the training data and struggles to generalize to new data.
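Continuing the hypothetical example, one simple way to add capacity is to enrich the feature representation: adding a squared term lets the same linear model capture the quadratic pattern. This is a sketch of the idea (assuming scikit-learn), not a prescription; pushing the polynomial degree much higher would risk the opposite problem, overfitting.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Same hypothetical quadratic data as above.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.5, size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Expanding the features (degree-2 polynomial) gives the model
# just enough capacity; both scores rise together.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X_train, y_train)
print(f"train R^2: {model.score(X_train, y_train):.2f}")
print(f"test  R^2: {model.score(X_test, y_test):.2f}")
```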
