Loss Function (or Cost Function)

A loss function, also known as a cost function, is a fundamental concept in machine learning used to quantify how well a predictive model is performing. It measures the discrepancy between the model’s predicted outputs and the actual (true) values: the smaller the loss, the closer the predictions are to the observations, and the better the model is performing. By evaluating this function, we can determine how far off our predictions are from the actual observations.
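One generic way to write this down (the notation here is illustrative, not a single standard): for a dataset of n examples, the overall cost is the average of a per-example loss L between each true value and the model’s prediction.

```latex
J(\theta) = \frac{1}{n} \sum_{i=1}^{n} L\big(y_i,\; \hat{y}_i\big),
\qquad \hat{y}_i = f(x_i;\, \theta)
```

Here f is the model with parameters \theta, and training aims to make J(\theta) as small as possible.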

Loss functions come in various forms depending on the task at hand. In regression problems, where the goal is to predict a continuous variable, mean squared error (MSE) and mean absolute error (MAE) are common choices: they average the squared and absolute differences between the predicted and actual values, respectively. In classification problems, by contrast, where the aim is to assign inputs to discrete classes, cross-entropy loss is widely used; it measures how far the predicted class probabilities are from the true class labels.
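As a concrete illustration, here is a minimal NumPy sketch of all three losses. The function names and toy values are ours for illustration, not a standard library API:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average of squared differences."""
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    """Mean absolute error: average of absolute differences."""
    return np.mean(np.abs(y_true - y_pred))

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Categorical cross-entropy: y_true is one-hot encoded, y_pred
    holds predicted class probabilities per sample; lower values mean
    the predicted distribution is closer to the true labels."""
    y_pred = np.clip(y_pred, eps, 1.0)  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# Regression example: predictions vs. true values
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])
print(mse(y_true, y_pred))  # 0.375
print(mae(y_true, y_pred))  # 0.5

# Classification example: two samples, three classes
labels = np.array([[1, 0, 0], [0, 1, 0]])               # one-hot truth
probs  = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])   # predictions
print(cross_entropy(labels, probs))  # ~0.29
```

Note how the cross-entropy only penalizes the probability assigned to the correct class: confident, correct predictions drive the loss toward zero, while confident mistakes are penalized heavily.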

Optimizing the loss function plays a critical role in machine learning. Training a model, through methods such as gradient descent, means searching for the model parameters that minimize the defined loss function, as sketched below. Selecting a loss function appropriate to the context and type of problem is crucial, as it directly shapes the quality and performance of the resulting model.
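To make the optimization step concrete, here is a minimal sketch of batch gradient descent fitting a straight line by minimizing MSE. The data, learning rate, and step count are illustrative choices, not prescriptions:

```python
import numpy as np

# Synthetic data: a noisy line y = 3x + 1
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0   # initial parameters
lr = 0.1          # learning rate

for step in range(500):
    y_pred = w * x + b
    error = y_pred - y
    loss = np.mean(error ** 2)         # MSE loss being minimized
    grad_w = 2 * np.mean(error * x)    # dLoss/dw
    grad_b = 2 * np.mean(error)        # dLoss/db
    w -= lr * grad_w                   # step against the gradient
    b -= lr * grad_b

print(w, b)  # approaches the true parameters (3.0, 1.0)
```

Each iteration computes the gradient of the loss with respect to the parameters and moves the parameters a small step in the opposite direction, so the loss decreases until it settles near a minimum.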
