Zero-shot learning in AI refers to a paradigm in which a machine learning model makes predictions for classes or tasks it never encountered during training. Unlike traditional supervised learning, where models require labeled training data for every possible class or task, zero-shot learning leverages semantic relationships or prior knowledge to generalize to unseen classes or tasks.
In zero-shot learning, models generalize across domains or categories by learning from related information, typically supplied as attributes, semantic embeddings, or other auxiliary data. For instance, a model trained to recognize certain dog breeds can still make a reasonable prediction for a breed it has never encountered by comparing the shared characteristics (such as size, coat, and ear shape) that relate the new breed to the known ones.
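A minimal sketch of this attribute-based idea is shown below. The breed names, attribute values, and the assumption that image features already live in the same attribute space are all hypothetical and for illustration only; real systems learn a mapping from raw features into the semantic space.

```python
import numpy as np

# Hypothetical attribute vectors (size, coat length, ear floppiness) for
# breeds seen during training and one breed the model has never seen.
seen_classes = {
    "chihuahua":  np.array([0.1, 0.2, 0.3]),
    "husky":      np.array([0.7, 0.9, 0.2]),
    "great_dane": np.array([1.0, 0.1, 0.8]),
}
unseen_classes = {
    "saint_bernard": np.array([0.95, 0.8, 0.85]),  # described by attributes, never trained on
}

def predict(sample_embedding, class_attributes):
    """Assign the class whose attribute vector has the highest cosine similarity to the sample."""
    best_class, best_score = None, -np.inf
    for name, attrs in class_attributes.items():
        score = sample_embedding @ attrs / (
            np.linalg.norm(sample_embedding) * np.linalg.norm(attrs)
        )
        if score > best_score:
            best_class, best_score = name, score
    return best_class, best_score

# Feature vector for a new image, assumed to be projected into the attribute space.
image_features = np.array([0.9, 0.75, 0.9])

# Seen and unseen classes are scored the same way, so the unseen breed is still predictable.
print(predict(image_features, {**seen_classes, **unseen_classes}))
```

Because prediction reduces to similarity in a shared semantic space, adding a new class only requires describing it with attributes, not collecting labeled examples.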
Zero-shot learning has applications in scenarios where obtaining labeled data for every class or task is impractical or expensive. It allows models to adapt to new tasks or classes without retraining from scratch. Techniques like transfer learning, knowledge graph embeddings, and generative models play a significant role in zero-shot learning approaches.
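As a concrete usage example, libraries such as Hugging Face Transformers offer zero-shot text classification built on pretrained natural language inference models. The snippet below is a sketch that assumes the `transformers` package is installed and the `facebook/bart-large-mnli` model can be downloaded; the candidate labels are arbitrary and were never part of any task-specific training.

```python
from transformers import pipeline

# Zero-shot text classification via a pretrained NLI model: candidate labels
# can be anything, with no labeled examples or fine-tuning required.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The new GPU delivers twice the throughput at half the power draw.",
    candidate_labels=["hardware", "cooking", "politics"],
)
print(result["labels"][0], result["scores"][0])  # most likely label and its score
```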