Fine-tuning is the process of adapting a pre-trained model to a specific task. It is a form of transfer learning: a model developed for one task is repurposed for a second, related task, so that the knowledge it has already acquired carries over to the new problem.
The process typically begins with a model that has been pre-trained on a large dataset: for a vision model, a large-scale task such as ImageNet classification; for a language model, a large text corpus. This pre-trained model will have already learned general-purpose features that transfer to many related tasks. The idea is to leverage these learned features instead of starting the learning process from a random initialization. The model's parameters are then tweaked (or fine-tuned) further on a smaller, task-specific dataset, often with some of the early layers frozen so that only the task-specific layers are updated.
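The idea can be sketched with a toy example. The code below is a minimal, stdlib-only illustration, not a real training pipeline: a hypothetical "pre-trained" feature extractor is kept frozen, and only a small task-specific linear head is trained by gradient descent on new data. All names (`pretrained_features`, the dataset, the learning rate) are invented for illustration.

```python
import random

random.seed(0)

def pretrained_features(x):
    # Frozen "backbone": pretend these features were learned on a
    # large dataset; they are NOT updated during fine-tuning.
    return [x, x * x]

# Small task-specific dataset (noiseless toy target): y = 2*x + 3*x^2.
data = [(x / 10, 2 * (x / 10) + 3 * (x / 10) ** 2) for x in range(-10, 11)]

# Trainable head: a linear layer on top of the frozen features,
# initialized with small random weights.
w = [random.uniform(-0.1, 0.1) for _ in range(2)]
lr = 0.1

def predict(x):
    feats = pretrained_features(x)
    return sum(wi * fi for wi, fi in zip(w, feats))

def mse():
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

loss_before = mse()
for _ in range(500):  # fine-tune only the head weights
    for x, y in data:
        err = predict(x) - y
        feats = pretrained_features(x)
        for i in range(len(w)):
            w[i] -= lr * err * feats[i]  # SGD step on the head only
loss_after = mse()
```

Because the backbone is frozen, only two parameters are updated, which mirrors why fine-tuning works with small datasets: most of the representational work was already done during pre-training.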
Fine-tuning is a crucial stage in the machine learning pipeline, especially when labeled data for the task at hand is limited. It makes meaningful, task-specific adjustments to the model parameters, allowing the model to adapt to the specifics of the new task. From image recognition to natural language processing, fine-tuning is a proven and effective technique for improving model performance at a much lower computational cost than training from scratch.