Accuracy
Accuracy is the fraction of predictions our model got right. It is defined as:

\[ \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \]

where \(TP\), \(TN\), \(FP\), and \(FN\) are the numbers of true positives, true negatives, false positives, and false negatives, respectively.
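As a concrete illustration, accuracy can be computed directly from label arrays. The following is a minimal NumPy sketch with made-up labels (all data here is illustrative):

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1])  # ground-truth labels (toy data)
y_pred = np.array([1, 0, 0, 1, 0, 1])  # model predictions (toy data)

# Fraction of predictions that match the ground truth: 5 of 6 correct
accuracy = np.mean(y_true == y_pred)
print(accuracy)  # ~0.833
```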
Area Under ROC Curve (AUC-ROC)
The Area Under the Receiver Operating Characteristic Curve (AUC-ROC) is a performance measurement for classification problems. The ROC curve plots the true positive rate (recall) against the false positive rate (1 - specificity). The AUC measures separability: it tells how well the model can distinguish between classes, and the higher the AUC, the better the model. It is defined as:

\[ AUC = \int_0^1 TPR(f) \, df \]

where:
- \(TPR\) is the true positive rate (recall),
- \(f\) is the false positive rate.
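In practice the integral is rarely evaluated by hand; scikit-learn's `roc_auc_score` computes the AUC from labels and predicted scores. A minimal sketch with toy data:

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]            # toy labels
y_score = [0.1, 0.4, 0.35, 0.8]  # predicted probabilities for the positive class
print(roc_auc_score(y_true, y_score))  # 0.75
```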
Cross-Entropy Loss
Cross-Entropy Loss is a measure used to quantify the difference between two probability distributions for a given random variable or set of events. For a true distribution \(p\) and a predicted distribution \(q\), it is defined as:

\[ H(p, q) = -\sum_{x} p(x) \log q(x) \]
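A minimal NumPy sketch of this formula, using a one-hot true distribution and toy predicted probabilities:

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """Cross-entropy between true distribution p and predicted distribution q."""
    q = np.clip(q, eps, 1.0)  # guard against log(0)
    return -np.sum(p * np.log(q))

p = np.array([1.0, 0.0, 0.0])  # one-hot true label (toy data)
q = np.array([0.7, 0.2, 0.1])  # predicted class probabilities
print(cross_entropy(p, q))     # ~0.357 (= -log 0.7)
```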
Explained Variance Score
The Explained Variance Score measures the proportion to which a mathematical model accounts for the variation (dispersion) of a given data set. It is defined as:

\[ \text{Explained Variance} = 1 - \frac{\text{Var}(y - \hat{y})}{\text{Var}(y)} \]

where:
- \(\text{Var}(y - \hat{y})\) is the variance of the errors,
- \(\text{Var}(y)\) is the variance of the actual values.
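If scikit-learn is available, `explained_variance_score` implements this directly; a sketch with toy values:

```python
from sklearn.metrics import explained_variance_score

y_true = [3.0, -0.5, 2.0, 7.0]  # actual values (toy data)
y_pred = [2.5, 0.0, 2.0, 8.0]   # predicted values
print(explained_variance_score(y_true, y_pred))  # ~0.957
```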
F1 Score
The F1 score is the harmonic mean of precision and recall, providing a balance between them. It is calculated as:

\[ F1 = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \]

where Precision and Recall are as defined below.
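A minimal sketch using scikit-learn's `f1_score` on toy labels:

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 1, 0, 1, 1]  # toy labels
y_pred = [0, 1, 0, 0, 1, 1]
# precision = 1.0, recall = 0.75 -> F1 = 2 * 0.75 / 1.75
print(f1_score(y_true, y_pred))  # ~0.857
```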
IoU (Intersection over Union) for Object Detection
IoU is a measure of the overlap between two bounding boxes. It is defined as:

\[ IoU = \frac{\text{Area of Overlap}}{\text{Area of Union}} \]
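A minimal sketch for axis-aligned boxes, assuming the common `(x1, y1, x2, y2)` corner format (the format is an assumption; adjust for your detector's convention):

```python
def box_iou(box_a, box_b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    # Intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ~ 0.143
```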
mAP (Mean Average Precision)
Mean Average Precision (mAP) is a common metric used to evaluate object detection models. It summarizes the precision-recall curve for each class and averages over classes. It is calculated as:

\[ mAP = \frac{1}{N} \sum_{i=1}^{N} AP_i \]

where:
- \(N\) is the number of classes,
- \(AP_i\) is the Average Precision for class \(i\).

Common variants include the following; a simplified computation sketch follows the list.
- mAP@[IoU=50]: mAP at 50% IoU threshold.
- mAP@[IoU=75]: mAP at 75% IoU threshold.
- mAP@[area=small]: mAP for small objects.
- mAP@[area=medium]: mAP for medium objects.
- mAP@[area=large]: mAP for large objects.
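Below is a simplified, VOC-style sketch of the AP/mAP computation. It assumes you already have per-class precision-recall points from matched detections; real evaluators such as the COCO toolkit also handle IoU matching, score thresholds, and the area variants listed above.

```python
import numpy as np

def average_precision(recalls, precisions):
    """Area under a precision-recall curve with the usual
    right-to-left precision envelope (VOC-style)."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    for i in range(len(p) - 2, -1, -1):  # make precision non-increasing
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]   # points where recall increases
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# Toy PR points for two hypothetical classes; mAP is the mean of the APs
ap_per_class = [
    average_precision(np.array([0.5, 1.0]), np.array([1.0, 0.5])),  # 0.75
    average_precision(np.array([0.4, 0.8]), np.array([0.9, 0.6])),  # 0.60
]
print(sum(ap_per_class) / len(ap_per_class))  # 0.675
```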
Mean Absolute Error (MAE)
Mean Absolute Error (MAE) measures the average magnitude of the errors in a set of predictions, without considering their direction. It is calculated as:

\[ MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i| \]

where:
- \(n\) is the number of data points,
- \(y_i\) is the actual value,
- \(\hat{y}_i\) is the predicted value.
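A one-line NumPy sketch of the formula on toy values:

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])  # toy values
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
print(np.mean(np.abs(y_true - y_pred)))  # 0.5
```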
Mean Squared Error (MSE)
Mean Squared Error (MSE) measures the average of the squares of the errors, that is, the average squared difference between the estimated values and the actual values. It is calculated as:

\[ MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \]

where:
- \(n\) is the number of data points,
- \(y_i\) is the actual value,
- \(\hat{y}_i\) is the predicted value.
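The same toy values as in the MAE example, squared instead of taken in absolute value:

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])  # same toy values as the MAE example
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
print(np.mean((y_true - y_pred) ** 2))  # 0.375
```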
Precision
Precision indicates the accuracy of positive predictions. It is defined as:

\[ \text{Precision} = \frac{TP}{TP + FP} \]

where:
- \(TP\) is the number of true positives,
- \(FP\) is the number of false positives.
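A minimal sketch with scikit-learn's `precision_score` on toy labels:

```python
from sklearn.metrics import precision_score

y_true = [0, 1, 1, 0, 1, 1]  # toy labels
y_pred = [1, 1, 0, 0, 1, 1]  # TP = 3, FP = 1
print(precision_score(y_true, y_pred))  # 0.75
```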
Recall
Recall measures the ability of a model to find all relevant cases within a dataset. It is defined as:

\[ \text{Recall} = \frac{TP}{TP + FN} \]

where:
- \(TP\) is the number of true positives,
- \(FN\) is the number of false negatives.
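The counterpart sketch with `recall_score`, on the same toy labels as the precision example:

```python
from sklearn.metrics import recall_score

y_true = [0, 1, 1, 0, 1, 1]  # same toy labels as the precision example
y_pred = [1, 1, 0, 0, 1, 1]  # TP = 3, FN = 1
print(recall_score(y_true, y_pred))  # 0.75
```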
In object detection, recall is often reported under several variants:
- Recall@[max_detections=1]: Recall when considering only the top 1 detection per image.
- Recall@[max_detections=10]: Recall when considering the top 10 detections per image.
- Recall@[max_detections=100]: Recall when considering the top 100 detections per image.
- Recall@[area=small]: Recall for small objects.
- Recall@[area=medium]: Recall for medium objects.
- Recall@[area=large]: Recall for large objects.
Sigmoid Function
The Sigmoid function is used for binary classification in logistic regression models, mapping any real input into the range (0, 1). It is defined as:

\[ \sigma(x) = \frac{1}{1 + e^{-x}} \]
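A minimal NumPy sketch:

```python
import numpy as np

def sigmoid(x):
    # Maps any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([-2.0, 0.0, 2.0])))  # ~[0.119 0.5 0.881]
```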
Softmax Function
The Softmax function is used for multi-class classification. It converts logits to probabilities that sum to 1. For a logit vector \(z\) over \(K\) classes, it is defined for class \(j\) as:

\[ \text{softmax}(z)_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}} \]
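A minimal NumPy sketch; subtracting the maximum logit is a standard stability trick that leaves the result unchanged:

```python
import numpy as np

def softmax(z):
    # Subtracting the max logit avoids overflow without changing the output
    e = np.exp(z - np.max(z))
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # ~[0.659 0.242 0.099], sums to 1
```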
Weighted Average F1 Score
Weighted Average F1 Score takes into account the F1 score of each class and the number of instances for each class. It is defined as:

\[ \text{Weighted F1} = \sum_{i=1}^{n} \frac{TP_i + FN_i}{TP + FN} \cdot F1_i \]

where:
- \(n\) is the number of classes,
- \(TP_i\) is the true positives for class \(i\),
- \(FN_i\) is the false negatives for class \(i\),
- \(TP\) is the total number of true positives,
- \(FN\) is the total number of false negatives,
- \(F1_i\) is the F1 score for class \(i\).
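In scikit-learn this weighting is available via `average="weighted"`; a sketch on toy multi-class labels:

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 0, 1, 2, 0, 0]  # toy multi-class labels
y_pred = [0, 2, 1, 0, 1, 1, 0, 1]
# "weighted" weights each per-class F1 by that class's support
print(f1_score(y_true, y_pred, average="weighted"))  # ~0.512
```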
Weighted Average Precision
Weighted Average Precision takes into account the precision of each class and the number of instances for each class. It is defined as:

\[ \text{Weighted Precision} = \sum_{i=1}^{n} \frac{TP_i + FN_i}{TP + FN} \cdot \text{Precision}_i \]

where:
- \(n\) is the number of classes,
- \(TP_i\) is the true positives for class \(i\),
- \(FN_i\) is the false negatives for class \(i\),
- \(TP\) is the total number of true positives,
- \(FN\) is the total number of false negatives,
- \(\text{Precision}_i\) is the precision for class \(i\).
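The same toy labels as in the weighted F1 example, with `precision_score`:

```python
from sklearn.metrics import precision_score

y_true = [0, 1, 2, 0, 1, 2, 0, 0]  # same toy labels as above
y_pred = [0, 2, 1, 0, 1, 1, 0, 1]
print(precision_score(y_true, y_pred, average="weighted"))  # 0.5625
```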
Weighted Average Recall
Weighted Average Recall takes into account the recall of each class and the number of instances for each class. It is defined as:

\[ \text{Weighted Recall} = \sum_{i=1}^{n} \frac{TP_i + FN_i}{TP + FN} \cdot \text{Recall}_i \]

where:
- \(n\) is the number of classes,
- \(TP_i\) is the true positives for class \(i\),
- \(FN_i\) is the false negatives for class \(i\),
- \(TP\) is the total number of true positives,
- \(FN\) is the total number of false negatives,
- \(\text{Recall}_i\) is the recall for class \(i\).
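And the matching `recall_score` sketch; note that for single-label multi-class data, weighted recall equals plain accuracy:

```python
from sklearn.metrics import recall_score

y_true = [0, 1, 2, 0, 1, 2, 0, 0]  # same toy labels as above
y_pred = [0, 2, 1, 0, 1, 1, 0, 1]
print(recall_score(y_true, y_pred, average="weighted"))  # 0.5
```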