
Precision, Recall and F1 Score

 


Precision is the fraction of predicted positive instances that are actually positive. In other words, it is the number of true positives divided by the number of true positives plus the number of false positives.
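As a minimal sketch (the counts here are made up for illustration), precision can be computed directly from true-positive and false-positive counts:

```python
def precision(tp, fp):
    """Fraction of predicted positives that are truly positive."""
    return tp / (tp + fp) if (tp + fp) else 0.0

# e.g. 80 true positives and 20 false positives
print(precision(80, 20))  # 0.8
```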

Recall is the fraction of actual positive instances that are predicted positive. In other words, it is the number of true positives divided by the number of true positives plus the number of false negatives.
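Recall follows the same pattern, swapping false positives for false negatives (again, the counts are made up):

```python
def recall(tp, fn):
    """Fraction of actual positives that the model found."""
    return tp / (tp + fn) if (tp + fn) else 0.0

# e.g. 80 of 90 actual positives were found
print(recall(80, 10))
```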

F1 score is the harmonic mean of precision and recall. The harmonic mean is more sensitive to low values than the arithmetic mean, so if either precision or recall is low, the F1 score will be low as well.
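A short sketch makes the harmonic mean's behavior concrete: with precision 0.9 and recall 0.1, the arithmetic mean would be 0.5, but the F1 score is dragged down by the low recall.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; a low value in either drags F1 down."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Arithmetic mean of 0.9 and 0.1 is 0.5; the harmonic mean is much lower:
print(f1_score(0.9, 0.1))  # 0.18
```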

A perfect model would have a precision and recall of 1, which would give an F1 score of 1. However, in practice, no model is perfect, so the F1 score will always be less than 1.

The F1 score is a more comprehensive measure of model performance than accuracy because it takes both precision and recall into account. Accuracy only counts how many predictions were correct overall, while precision and recall distinguish between the two kinds of errors: false positives and false negatives.

The F1 score is especially useful for models with an uneven class distribution. In that case, accuracy can be misleading: a model that simply predicts the majority class every time still scores high. Because the F1 score ignores true negatives, it is far less flattered by a large majority class, so it gives a more honest picture of performance on the class you care about.

Here is an example to illustrate the difference between accuracy, precision, and recall. Let's say we have a model that classifies 100 images as either cats or dogs, where 90 of the images are cats and 10 are dogs. The model correctly classifies 89 of the 90 cats but only 1 of the 10 dogs, for an accuracy of 90%. For the dog class, the model predicted "dog" only twice (one real dog and one misclassified cat), so the precision for dogs is 1/2 = 0.5 and the recall for dogs is 1/10 = 0.1, giving an F1 score for dogs of only about 0.17.

In this example, the accuracy is high because the model does well on the majority class (cats), while its performance on the minority class (dogs) is poor. The per-class precision, recall, and F1 score expose that weakness and provide a more comprehensive measure of model performance than accuracy alone.
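A quick numeric check of an imbalanced scenario like this (the confusion counts below are assumed for illustration: 90 cat images, 10 dog images, with the model getting 89 cats and 1 dog right):

```python
# Hypothetical confusion counts for the "dog" class:
tp_dog = 1   # dogs correctly called dogs
fn_dog = 9   # dogs incorrectly called cats
fp_dog = 1   # cats incorrectly called dogs

precision_dog = tp_dog / (tp_dog + fp_dog)   # 1/2 = 0.5
recall_dog = tp_dog / (tp_dog + fn_dog)      # 1/10 = 0.1
f1_dog = 2 * precision_dog * recall_dog / (precision_dog + recall_dog)

accuracy = (89 + 1) / 100                    # overall accuracy
print(accuracy)            # 0.9
print(round(f1_dog, 3))    # 0.167
```

Accuracy looks comfortable at 0.9, yet the F1 score for the dog class is roughly 0.17, which is the gap the post is describing.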

Photo by Los Muertos Crew