False Negative, a term often used in Statistics and Machine Learning, refers to a specific kind of error that can occur in a binary classification test. In binary classification, the aim is to sort instances into one of two categories: positive or negative. A False Negative occurs when an instance that is actually positive is incorrectly classified as negative by the model; hence the name, since the instance has falsely been identified as negative.
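The definition above can be sketched in a few lines of plain Python. The label and prediction lists here are hypothetical examples, not data from any real model: an instance counts as a False Negative when its actual label is positive but the predicted label is negative.

```python
# Hypothetical labels for eight instances: 1 = positive, 0 = negative.
actual    = [1, 1, 0, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 0, 0]  # the model's output for each instance

# A False Negative is an instance that is actually positive (1)
# but was predicted negative (0).
false_negatives = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
true_positives  = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)

print(false_negatives)  # 2 of the 4 actual positives were missed
print(true_positives)   # 2 were correctly detected
```

In this toy example, the model misses two of the four actual positives, so it produces two False Negatives.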
False Negatives are especially critical in real-world situations where failing to recognize a positive instance has serious consequences. Examples include a medical diagnostic test failing to detect a disease that a patient actually has, or a spam filter failing to catch a spam email and delivering it to a user's inbox instead. In both cases, a False Negative can have a significant negative impact: delayed treatment in the case of the disease, or exposure to potential phishing attacks in the case of the spam filter.
Thus, minimizing the rate of False Negatives is often a primary goal when designing, training, and testing predictive models. This rate, computed as FN / (FN + TP), is known in statistics as the Type II error rate. Reducing False Negatives often comes at the expense of increasing False Positives (classifying a negative instance as positive), so there is usually a trade-off that data scientists need to balance during model construction and evaluation.
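The trade-off described above can be illustrated with a short sketch. The scores and labels below are hypothetical; the point is that lowering the decision threshold on a model's scores turns more borderline instances into predicted positives, which reduces False Negatives but increases False Positives.

```python
# Hypothetical model scores (higher = more likely positive) and true labels.
scores = [0.95, 0.80, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    1,    0,    0,    0]  # 1 = positive

def errors_at(threshold):
    """Count False Negatives and False Positives at a given decision threshold."""
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return fn, fp

for t in (0.7, 0.5, 0.3):
    fn, fp = errors_at(t)
    print(f"threshold={t}: false negatives={fn}, false positives={fp}")
```

On this toy data, a threshold of 0.7 yields two False Negatives and no False Positives, while a threshold of 0.3 yields no False Negatives but two False Positives, showing how one type of error is traded for the other.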