util.evaluation.metrics package¶
Submodules¶
util.evaluation.metrics.accuracy module¶
util.evaluation.metrics.accuracy.accuracy(predicted, target, topk=1)[source]¶
Computes the accuracy@K for the specified values of K.
From https://github.com/pytorch/examples/blob/master/imagenet/main.py
- Parameters
predicted (torch.FloatTensor) – The predicted output values of the model. The size is batch_size x num_classes
target (torch.LongTensor) – The ground truth for the corresponding output. The size is batch_size x 1
topk (tuple) – Multiple values for K can be specified in a tuple, and the different accuracies@K will be computed.
- Returns
res – List of accuracies computed at the different K’s specified in topk
- Return type
list
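The semantics above can be sketched in plain Python. This is a hedged, illustrative re-implementation (the name `accuracy_at_k` is hypothetical; the real `accuracy` operates on torch tensors), showing how a prediction counts as correct when the target is among the K highest-scoring classes:

```python
# Illustrative sketch of the accuracy@K logic in plain Python; the real
# accuracy() works on torch.FloatTensor / torch.LongTensor inputs.

def accuracy_at_k(predicted, target, topk=(1,)):
    """predicted: batch_size x num_classes score lists; target: true class
    indices; returns one accuracy (in percent) per K in topk."""
    res = []
    for k in topk:
        correct = 0
        for scores, t in zip(predicted, target):
            # Indices of the k highest-scoring classes for this sample.
            top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
            correct += t in top
        res.append(100.0 * correct / len(target))
    return res

scores = [[0.1, 0.7, 0.2],   # top-1 prediction: class 1
          [0.5, 0.2, 0.3],   # top-1 prediction: class 0
          [0.3, 0.4, 0.3]]   # top-1 prediction: class 1
targets = [1, 2, 1]
print(accuracy_at_k(scores, targets, topk=(1, 2)))  # accuracy@1, accuracy@2
```

Here the second sample is wrong at K=1 (class 0 scores highest) but correct at K=2 (class 2 is the second-highest score), so accuracy@1 is ~66.7% while accuracy@2 is 100%.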
util.evaluation.metrics.accuracy.accuracy_segmentation(label_trues, label_preds, n_class)[source]¶
Calculates the accuracy measures for the segmentation runner.
Taken from https://github.com/wkentaro/pytorch-fcn
- Parameters
label_trues (matrix (batch size x H x W)) – contains the true class labels for each pixel
label_preds (matrix (batch size x H x W)) – contains the predicted class for each pixel
n_class (int) – number of possible classes
border_pixel (boolean) – true if border pixel value should be
- Returns
overall accuracy, mean accuracy, mean IU, fwavacc
- Return type
tuple
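All four measures can be read off a per-pixel confusion matrix. The following is a hedged sketch of that computation (the name `segmentation_scores` is hypothetical, and it assumes every class appears in the ground truth; the pytorch-fcn code the docstring cites uses the same confusion-matrix recipe with NumPy):

```python
# Sketch: overall accuracy, mean accuracy, mean IU and fwavacc derived
# from an n_class x n_class confusion matrix hist[true][pred].

def segmentation_scores(label_trues, label_preds, n_class):
    hist = [[0] * n_class for _ in range(n_class)]
    for lt, lp in zip(label_trues, label_preds):   # over the batch
        for rt, rp in zip(lt, lp):                 # over rows
            for t, p in zip(rt, rp):               # over pixels
                hist[t][p] += 1
    total = sum(map(sum, hist))
    diag = [hist[i][i] for i in range(n_class)]
    row = [sum(hist[i]) for i in range(n_class)]   # true pixels per class
    col = [sum(hist[i][j] for i in range(n_class)) for j in range(n_class)]
    overall_acc = sum(diag) / total
    mean_acc = sum(diag[i] / row[i] for i in range(n_class) if row[i]) / n_class
    # Intersection over union per class: TP / (TP + FP + FN).
    iu = [diag[i] / (row[i] + col[i] - diag[i]) if row[i] + col[i] - diag[i] else 0.0
          for i in range(n_class)]
    mean_iu = sum(iu) / n_class
    # Frequency-weighted accuracy: IU weighted by class frequency.
    fwavacc = sum((row[i] / total) * iu[i] for i in range(n_class))
    return overall_acc, mean_acc, mean_iu, fwavacc

trues = [[[0, 0], [1, 1]]]   # one 2x2 image: top row class 0, bottom class 1
preds = [[[0, 1], [1, 1]]]   # one class-0 pixel mispredicted as class 1
print(segmentation_scores(trues, preds, n_class=2))
```

In this toy case 3 of 4 pixels are correct, so the overall accuracy is 0.75, and the class IUs are 0.5 and 2/3.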
util.evaluation.metrics.apk module¶
util.evaluation.metrics.apk.apk(query, predicted, k='full')[source]¶
Computes the average precision@k.
- Parameters
query (int) – Query label.
predicted (List(int)) – Ordered list where each element is a label.
k (str or int) – If int, the cutoff for retrieval is set to k. If str, ‘full’ means the cutoff is the end of predicted; ‘auto’ means the cutoff is set to the number of relevant elements in predicted.
- Example:
query = 0; predicted = [0, 0, 1, 1, 0]. If k == ‘full’, then k is set to 5. If k == ‘auto’, then k is set to the number of occurrences of query in predicted, i.e., k = 3, as there are 3 of them in predicted.
- Returns
Average Precision@k
- Return type
float
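The docstring's example can be traced with a hedged re-implementation sketch (the name `apk_sketch` is hypothetical, and the denominator convention here, dividing by the number of hits retrieved, is an assumption that may differ from the package's actual code):

```python
# Sketch of average precision@k for a single query label, following the
# docstring's cutoff rules for k='full' and k='auto'.

def apk_sketch(query, predicted, k='full'):
    if k == 'full':
        k = len(predicted)
    elif k == 'auto':
        k = predicted.count(query)           # number of relevant elements
    hits, precision_sum = 0, 0.0
    for i, label in enumerate(predicted[:k]):
        if label == query:
            hits += 1
            precision_sum += hits / (i + 1)  # precision at each hit
    return precision_sum / hits if hits else 0.0

# The docstring's example: k='auto' cuts off after the first 3 elements.
print(apk_sketch(0, [0, 0, 1, 1, 0], k='auto'))  # → 1.0
```

With `k='auto'` the cutoff is 3, the first two elements are hits with precision 1/1 and 2/2, so the result is 1.0; with `k='full'` the late third hit at rank 5 pulls the score down.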
util.evaluation.metrics.apk.compute_mapk(distances, labels, k, workers=None)[source]¶
Convenience function that converts a grid of pairwise distances into predicted elements in order to evaluate the mean average precision@K.
- Parameters
distances (ndarray) – A numpy array containing pairwise distances between all elements
labels (list) – Ground truth labels for every element
k (int) – Maximum number of predicted elements
- Returns
float – The mean average precision@K.
dict {label: float} – The per-class mean average precision@K.
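The conversion step can be sketched as follows: for each element, rank all other elements by ascending distance, read off their labels as the prediction list, and average AP@k over all queries. This is a hedged sketch (the name `compute_mapk_sketch` is hypothetical, it returns only the overall mean, and the AP denominator convention is an assumption):

```python
# Sketch: turn a pairwise-distance grid into ranked label predictions,
# then average AP@k over every element used as a query.

def compute_mapk_sketch(distances, labels, k):
    n = len(labels)
    ap_sum = 0.0
    for q in range(n):
        # Rank all other elements by ascending distance to the query.
        order = sorted((j for j in range(n) if j != q),
                       key=lambda j: distances[q][j])
        predicted = [labels[j] for j in order][:k]
        # AP@k for this query (dividing by the number of hits retrieved).
        hits, psum = 0, 0.0
        for i, lab in enumerate(predicted):
            if lab == labels[q]:
                hits += 1
                psum += hits / (i + 1)
        ap_sum += psum / hits if hits else 0.0
    return ap_sum / n

# Tiny symmetric grid: elements 0 and 1 share label 'a'; element 2 is 'b'.
distances = [[0.0, 0.2, 0.9],
             [0.2, 0.0, 0.8],
             [0.9, 0.8, 0.0]]
labels = ['a', 'a', 'b']
print(compute_mapk_sketch(distances, labels, k=2))
```

Queries 0 and 1 each retrieve their matching 'a' first (AP 1.0), while query 2 has no other 'b' to retrieve (AP 0.0), so the mean is 2/3.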
util.evaluation.metrics.apk.mapk(query, predicted, k=None, workers=1)[source]¶
Compute the mean Average Precision@K.
- Parameters
query (list) – List of queries.
predicted (list of list, or generator to list of lists) – Predicted responses for each query. Supports chunking with slices in the first dimension.
k (str or int) – If int, the cutoff for retrieval is set to k. If str, ‘full’ means the cutoff is the end of predicted; ‘auto’ means the cutoff is set to the number of relevant elements in predicted.
- Example:
query = 0; predicted = [0, 0, 1, 1, 0]. If k == ‘full’, then k is set to 5. If k == ‘auto’, then k is set to the number of occurrences of query in predicted, i.e., k = 3, as there are 3 of them in predicted.
workers (int) – Number of parallel workers used to compute the AP@k
- Returns
float – The mean average precision@K.
dict {label: float} – The per-class mean average precision@K.
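Taken together, mapk is the mean of per-query AP@k values. A hedged sequential sketch (the name `mapk_sketch` is hypothetical; the real function can also parallelize over `workers` and consume `predicted` in chunks, which this sketch omits):

```python
# Sketch: mean of average precision@k over (query, predicted) pairs,
# sequential version without the workers/chunking machinery.

def mapk_sketch(query, predicted, k=None):
    def apk_one(q, pred, k):
        if k in (None, 'full'):
            kk = len(pred)
        elif k == 'auto':
            kk = pred.count(q)               # number of relevant elements
        else:
            kk = k
        hits, psum = 0, 0.0
        for i, lab in enumerate(pred[:kk]):
            if lab == q:
                hits += 1
                psum += hits / (i + 1)       # precision at each hit
        return psum / hits if hits else 0.0
    return sum(apk_one(q, p, k) for q, p in zip(query, predicted)) / len(query)

queries = [0, 1]
predictions = [[0, 0, 1],   # both relevant items ranked first: AP = 1.0
               [0, 1, 1]]   # first hit only at rank 2: AP = 0.5
print(mapk_sketch(queries, predictions, k='auto'))  # → 0.75
```

The first query's relevant items fill the top ranks (AP 1.0), the second query's first hit arrives at rank 2 within a cutoff of 2 (AP 0.5), giving a mean of 0.75.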