Set of methods to ensemble boxes from different object detection models, including implementation of "Weighted boxes fusion (WBF)" method.

Overview


Weighted boxes fusion

The repository contains Python implementations of several methods for ensembling boxes from object detection models:

  • Non-maximum Suppression (NMS)
  • Soft-NMS [1]
  • Non-maximum weighted (NMW) [2]
  • Weighted boxes fusion (WBF) [3] - a new method that gives better results than the others

Requirements

Python 3.*, Numpy, Numba

Installation

pip install ensemble-boxes

Usage examples

Box coordinates are expected to be normalized, i.e. in the range [0; 1]. Order: x1, y1, x2, y2.

Below is an example of ensembling boxes from 2 models:

  • The first model predicts 5 boxes, the second model predicts 4 boxes.
  • Confidence scores for each box of model 1: [0.9, 0.8, 0.2, 0.4, 0.7]
  • Confidence scores for each box of model 2: [0.5, 0.8, 0.7, 0.3]
  • Labels (classes) for each box of model 1: [0, 1, 0, 1, 1]
  • Labels (classes) for each box of model 2: [1, 1, 1, 0]
  • We set the weight of the 1st model to 2 and the weight of the 2nd model to 1.
  • We set the intersection over union (IoU) threshold for boxes to match: iou_thr = 0.5
  • We skip boxes with confidence lower than skip_box_thr = 0.0001

from ensemble_boxes import *

boxes_list = [[
    [0.00, 0.51, 0.81, 0.91],
    [0.10, 0.31, 0.71, 0.61],
    [0.01, 0.32, 0.83, 0.93],
    [0.02, 0.53, 0.11, 0.94],
    [0.03, 0.24, 0.12, 0.35],
],[
    [0.04, 0.56, 0.84, 0.92],
    [0.12, 0.33, 0.72, 0.64],
    [0.38, 0.66, 0.79, 0.95],
    [0.08, 0.49, 0.21, 0.89],
]]
scores_list = [[0.9, 0.8, 0.2, 0.4, 0.7], [0.5, 0.8, 0.7, 0.3]]
labels_list = [[0, 1, 0, 1, 1], [1, 1, 1, 0]]
weights = [2, 1]

iou_thr = 0.5
skip_box_thr = 0.0001
sigma = 0.1

boxes, scores, labels = nms(boxes_list, scores_list, labels_list, weights=weights, iou_thr=iou_thr)
boxes, scores, labels = soft_nms(boxes_list, scores_list, labels_list, weights=weights, iou_thr=iou_thr, sigma=sigma, thresh=skip_box_thr)
boxes, scores, labels = non_maximum_weighted(boxes_list, scores_list, labels_list, weights=weights, iou_thr=iou_thr, skip_box_thr=skip_box_thr)
boxes, scores, labels = weighted_boxes_fusion(boxes_list, scores_list, labels_list, weights=weights, iou_thr=iou_thr, skip_box_thr=skip_box_thr)

Single model

If you need to apply NMS or any other method to the predictions of a single model, you can call the function like this:

from ensemble_boxes import *
# Merge boxes for single model predictions
boxes, scores, labels = weighted_boxes_fusion([boxes_list], [scores_list], [labels_list], weights=None, iou_thr=iou_thr, skip_box_thr=skip_box_thr)

More examples can be found in example.py

3D version

The WBF method also supports 3D boxes via the weighted_boxes_fusion_3d function. See example_3d.py for a usage example.
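
A minimal sketch, assuming the 3D variant follows the same conventions as the 2D call (same weights / iou_thr / skip_box_thr arguments, normalized [x1, y1, z1, x2, y2, z2] boxes):

from ensemble_boxes import weighted_boxes_fusion_3d

# Two models, one 3D box each; coordinates assumed normalized in [0; 1]
boxes_list = [
    [[0.10, 0.10, 0.10, 0.40, 0.40, 0.40]],
    [[0.12, 0.11, 0.09, 0.41, 0.39, 0.42]],
]
scores_list = [[0.9], [0.8]]
labels_list = [[0], [0]]
weights = [2, 1]

boxes, scores, labels = weighted_boxes_fusion_3d(boxes_list, scores_list, labels_list, weights=weights, iou_thr=0.55, skip_box_thr=0.0001)
print(boxes, scores, labels)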

Accuracy and speed comparison

The comparison was made for an ensemble of predictions from 5 different object detection models trained on the Open Images Dataset (500 classes).

Model scores at local validation:

  • Model 1: mAP(0.5) 0.5164
  • Model 2: mAP(0.5) 0.5019
  • Model 3: mAP(0.5) 0.5144
  • Model 4: mAP(0.5) 0.5152
  • Model 5: mAP(0.5) 0.4910
Method   | mAP(0.5) Result | Best params                       | Elapsed time (sec)
---------|-----------------|-----------------------------------|-------------------
NMS      | 0.5642          | IOU Thr: 0.5                      | 47
Soft-NMS | 0.5616          | Sigma: 0.1, Confidence Thr: 0.001 | 88
NMW      | 0.5667          | IOU Thr: 0.5                      | 171
WBF      | 0.5982          | IOU Thr: 0.6                      | 249

You can download model predictions as well as ground truth labels from here: test_data.zip

Ensemble script for them is available here: example_oid.py

We also published a large benchmark based on the COCO dataset here.

Description of WBF method and citation

If you find this code useful, please cite:

@article{solovyev2021weighted,
  title={Weighted boxes fusion: Ensembling boxes from different object detection models},
  author={Solovyev, Roman and Wang, Weimin and Gabruseva, Tatiana},
  journal={Image and Vision Computing},
  pages={1-6},
  year={2021},
  publisher={Elsevier}
}
Comments
  • Add simple confidence averaging strategy

    Hi, @ZFTurbo. I'm currently reading your paper and this repo, and I think I've identified the cause of the issue discussed in #10. I describe the cause in the sections below.

    To fix this issue, I made a PR that proposes another conf_type. Note that this modification can change confidence scores, but not any bbox coordinates.

    I'm not an expert, so I may have missed something important. If so, please feel free to point it out. I would appreciate it if you could review this, and I hope this PR will contribute to the community.

    Problem

    weighted_boxes_fusion can return confidence scores larger than 1.0 regardless of the allows_overflow parameter.

    Here is a reproducible example.

    from ensemble_boxes import *
    
    boxes_list = [[
        [0.1, 0.1, 0.2, 0.2],
        [0.1, 0.1, 0.2, 0.2],
        
    ],[
        [0.3, 0.3, 0.4, 0.4],
    ]]
    scores_list = [[1.0, 1.0], [0.5]]
    labels_list = [[0, 0], [0]]
    weights = [2, 1]
    
    iou_thr = 0.5
    skip_box_thr = 0.0001
    sigma = 0.1
    
    boxes, scores, labels = weighted_boxes_fusion(boxes_list, scores_list, labels_list, weights=weights, iou_thr=iou_thr, skip_box_thr=skip_box_thr, allows_overflow=True)
    print(scores)
    # [1.33333337 0.16666667]
    
    boxes, scores, labels = weighted_boxes_fusion(boxes_list, scores_list, labels_list, weights=weights, iou_thr=iou_thr, skip_box_thr=skip_box_thr, allows_overflow=False)
    print(scores)
    # [1.33333337 0.16666667]
    

    Why?

    According to the paper [1], the confidence score is given by eqs. (1) and (5). Combining the two equations, we get the following:

    C = \frac{\sum_i^T C_i}{T} \cdot \frac{\min(W, T)}{W}
    

    Since the current implementation takes the weights into account, we should rewrite the equation. If we expand the min(...) part explicitly, the resulting equation looks like this:

    \begin{align*}
    C &= \frac{\sum_i^T w_i C_i}{T} \cdot \frac{\min(W, T)}{W} \\
      &= \begin{cases}
            \frac{\sum_i^T w_i C_i}{T}, & W \le T \\
            \frac{\sum_i^T w_i C_i}{W}, & W > T
         \end{cases}
    \end{align*}
    

    where w_i is the weight of each box and W denotes the total weight over the models (weights.sum()).

    The last equation implies that the result is not bounded by 1 unless the w_i are suitably normalized. In the case above, W = 3 and T = 2, so C = (2*1 + 2*1) / 3 = 1.33.... This is why the result can be larger than 1.0.

    A proposal to fix

    In this PR, I propose the following simple weighted average.

    C = \frac{\sum_i^T w_i C_i}{\sum_i^T w_i}
    

    Note that the denominator is the sum of weights over the boxes of a cluster (not over the models, i.e. not weights.sum()). Clearly, this is bounded in [0, 1] if the input scores are bounded in [0, 1].

    Note that simply normalizing the weights (so that weights.sum() == 1) does not fix the issue: the resulting scores would then be underestimated by the division by T or W.
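
    To make the difference concrete, here is a small sketch (mine, not part of the PR) comparing the current formula with the proposed weighted average on the cluster from the example above:

    import numpy as np

    def current_conf(scores, box_weights, total_model_weight):
        # Current behaviour: weighted sum over the T boxes of the cluster,
        # divided by T, then rescaled by min(W, T) / W as in eqs. (1) and (5)
        T = len(scores)
        W = total_model_weight
        return (box_weights * scores).sum() / T * min(W, T) / W

    def weighted_avg_conf(scores, box_weights):
        # Proposed fix: plain weighted average over the boxes of the cluster
        return (box_weights * scores).sum() / box_weights.sum()

    # Cluster from the example: two boxes from model 1 (weight 2, score 1.0), W = 3
    scores = np.array([1.0, 1.0])
    box_weights = np.array([2.0, 2.0])
    print(current_conf(scores, box_weights, total_model_weight=3.0))  # 1.333...
    print(weighted_avg_conf(scores, box_weights))                     # 1.0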

    Here is an example with the weighted_avg version:

    from ensemble_boxes import *

    boxes_list = [[
        [0.1, 0.1, 0.2, 0.2],
        [0.1, 0.1, 0.2, 0.2],
        
    ],[
        [0.3, 0.3, 0.4, 0.4],
    ]]
    scores_list = [[1.0, 1.0], [0.5]]
    labels_list = [[0, 0], [0]]
    weights = [2, 1]
    
    iou_thr = 0.5
    skip_box_thr = 0.0001
    sigma = 0.1
    
    boxes, scores, labels = weighted_boxes_fusion(boxes_list, scores_list, labels_list, weights=weights, iou_thr=iou_thr, skip_box_thr=skip_box_thr, allows_overflow=False, conf_type='weighted_avg')
    print(scores)
    # [1.  0.5]
    
    ## use other scores
    scores_list = [[0.8, 1.0], [0.5]]
    boxes, scores, labels = weighted_boxes_fusion(boxes_list, scores_list, labels_list, weights=weights, iou_thr=iou_thr, skip_box_thr=skip_box_thr, allows_overflow=False, conf_type='weighted_avg')
    print(scores)
    # [0.89999998 0.5]
    

    Performance

    Sorry, I did not conduct any performance experiments. So, I have no idea how much of an impact this fix will have on performance at this point.

    [1] https://arxiv.org/abs/1910.13302

    opened by i-aki-y 8
  • How to find the optimal hyper-parameters in the ensembling process

    Hello, I would like to ask how to find the best parameter combination, such as the model weights and the IoU threshold, when performing the ensemble. I have prediction results from different models and am trying to ensemble them, but I don't know how to find the optimal parameters. Any suggestions would be helpful! Thank you in advance.
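
    One simple approach (not from this thread) is a brute-force grid search over iou_thr and the per-model weights against a validation metric. A minimal sketch, assuming a hypothetical evaluate_map(boxes, scores, labels) function that computes mAP for the fused predictions against ground truth:

    import itertools
    from ensemble_boxes import weighted_boxes_fusion

    def grid_search(boxes_list, scores_list, labels_list, evaluate_map):
        # Brute-force search over IoU thresholds and per-model weights.
        # evaluate_map is a hypothetical user-supplied scoring function.
        best_params, best_score = None, -1.0
        for iou_thr in [0.4, 0.5, 0.55, 0.6, 0.7]:
            for weights in itertools.product([1, 2, 3], repeat=len(boxes_list)):
                boxes, scores, labels = weighted_boxes_fusion(
                    boxes_list, scores_list, labels_list,
                    weights=list(weights), iou_thr=iou_thr, skip_box_thr=0.0001,
                )
                score = evaluate_map(boxes, scores, labels)
                if score > best_score:
                    best_params, best_score = (iou_thr, weights), score
        return best_params, best_score

    In practice you would compute the metric over the whole validation set (fusing per image inside evaluate_map); the sketch keeps single-image lists for brevity.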

    opened by 123dddd 6
  • run NMS on empty bounding box from one of ensembled models

    Hi, I tried to ensemble 5 models. If no bounding boxes are detected by one of the 5 models, nms gives the following error:

    ValueError                                Traceback (most recent call last)
    Input In [39], in <cell line: 14>()
         12 lebar = dict_pred[img.split('\\')[-1].split('.')[0]]['image_width']
         13 tinggi = dict_pred[img.split('\\')[-1].split('.')[0]]['image_height']
    ---> 14 boxes, scores, labels = nms_method(bbox_list, conf_list, cat_list, method=2, weights=weights, iou_thr=0.3, sigma=0.05, thresh=0.001)
    
    File ~\Anaconda3\envs\SiT\lib\site-packages\ensemble_boxes\ensemble_boxes_nms.py:187, in nms_method(boxes, scores, labels, method, iou_thr, sigma, thresh, weights)
        184             scores[i] = (np.array(scores[i]) * weights[i]) / weights.sum()
        186 # We concatenate everything
    --> 187 boxes = np.concatenate(boxes)
        188 scores = np.concatenate(scores)
        189 labels = np.concatenate(labels)
    
    File <__array_function__ internals>:5, in concatenate(*args, **kwargs)
    
    ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 3 has 1 dimension(s)
    

    If I use the WBF method, it runs successfully. Is there any solution to run NMS in this situation the way WBF does? Please advise.

    Thank you, regards
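
    One possible workaround (not from this thread) is to drop the models that returned no boxes, together with their weights, before calling nms_method, so that np.concatenate only sees consistently shaped (N, 4) arrays. A rough sketch with a hypothetical helper:

    def drop_empty_models(boxes_list, scores_list, labels_list, weights=None):
        # Hypothetical helper: keep only models that returned at least one box,
        # so every per-model boxes array concatenates to shape (N, 4).
        kept = [i for i, b in enumerate(boxes_list) if len(b) > 0]
        boxes_list = [boxes_list[i] for i in kept]
        scores_list = [scores_list[i] for i in kept]
        labels_list = [labels_list[i] for i in kept]
        if weights is not None:
            weights = [weights[i] for i in kept]
        return boxes_list, scores_list, labels_list, weights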

    opened by ramdhan1989 5
  • Incorrect allows_overflow=False mode

    Example:

    from ensemble_boxes import *
    
    weigths = [0.2, 0.2, 0.2, 0.2, 0.2]
    pred_boxes = []
    pred_scores = []
    pred_labels = []
    for _ in range(5):
        pred_boxes.append([[0.    , 0.    , 0.0001, 0.0001]])
        pred_scores.append([1.])
        pred_labels.append([0])
    
    pred_boxes, pred_scores, pred_labels = weighted_boxes_fusion(
        pred_boxes,
        pred_scores,
        pred_labels,
        weights=weigths,
        iou_thr=0.4,
        skip_box_thr=0.,
        allows_overflow=False
    )
    print(pred_scores)
    

    Actual result: score [0.2]
    Expected result: score [1]

    Probably we need to change the line

    weighted_boxes[i][1] = weighted_boxes[i][1] * min(weights.sum(), len(clustered_boxes)) / weights.sum()

    to

    weighted_boxes[i][1] = weighted_boxes[i][1] * min(len(weights), len(clustered_boxes)) / weights.sum()

    opened by Sergey-Zlobin 5
  • Strange behavior of weighted_boxes_fusion

    @ZFTurbo

    boxes = [
        [[410, 464, 354, 410],
        [511, 89, 470, 32],
        [503, 213, 447, 166],
        [300, 444, 252, 391],
        [290, 184, 234, 133]],
        
        [[354, 412, 409, 463],
        [447, 165, 504, 212],
        [251, 392, 299, 444],
        [470, 35, 511, 90],
        [187, 316, 240, 378]],
    
        [[447, 166, 503, 213],
        [355, 412, 411, 464],
        [470, 33, 511, 88],
        [251, 391, 300, 444],
        [234, 132, 289, 184]],
    
        [[251, 391, 300, 445],
        [354, 412, 410, 463],
        [447, 166, 503, 212],
        [235, 134, 289, 184],
        [191, 316, 239, 380]],
    
        [[410, 465, 355, 412],
        [511, 88, 470, 33],
        [504, 213, 448, 167],
        [299, 444, 251, 392],
        [289, 185, 236, 133]]
    ]
    
    scores = [
        [0.893,0.886,0.865,0.864,0.801],
        [0.915,0.881,0.873,0.852,0.844],
        [0.896, 0.895,0.860,0.855,0.801],
        [0.900,0.889,0.861,0.838,0.831],
        [0.897,0.877, 0.877,0.861,0.820],
    ]
    
    labels = [
        [1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1]
    ]
    
    wbf_boxes, wbf_scores, wbf_labels = ensemble_boxes.ensemble_boxes_wbf.weighted_boxes_fusion(boxes, scores, labels, weights=None, iou_thr=0.43, skip_box_thr=0.44)
    
    wbf_boxes.shape
    >>> (16, 4)
    

    I think we see excess wbf_boxes, which is very strange. Demonstration picture attached.

    Python 3.7.6, numpy==1.18.1, ensemble_boxes==1.0.1

    opened by shonenkov 5
  • Why does my ensemble still have so many repetitive (IoU > thresh I set) bboxes?

    Thank you for your code.

    However, when I use this method, I still get many repetitive bboxes. Do you have any advice?

    The code is the following:

    boxes, score12, label = example_nms_2_models(boxes_list, labels_list, scores_list, draw_image=False, method=3, iou_thr=0.5, thresh=0.1, sigma=0.5)

    I only use the NMS method because of running-time limitations. An example of the results:

    (Pdb) boxes
    array([[0.73385417, 0.34259259, 0.81041667, 0.45648148],
           [0.74930206, 0.36653722, 0.79911662, 0.45000624]])

    The labels are the same. Obviously, the IoU of these two boxes should be over 0.5. It looks like NMS does not work in the ensemble. Right?

    opened by RainbowSun11Q2H 5
  • Speed up find_matching_box bottleneck

    When using weighted boxes fusion with a couple thousand detections, it quickly becomes very slow, requiring significantly longer than the inference itself. The bottleneck turned out to be the find_matching_box function, which is called n*n times for n detections. Vectorising this function with numpy speeds it up by a factor of around 100 and makes the weighted boxes fusion time negligible next to the inference time.
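
    For illustration only (this is not the PR's actual code), a rough sketch of how this kind of matching can be vectorised with numpy, assuming plain (N, 4) boxes in x1, y1, x2, y2 order:

    import numpy as np

    def find_matching_box_vectorized(boxes, new_box, iou_thr):
        # Return the index of the existing box with highest IoU above iou_thr, or -1.
        # Vectorised over all candidate boxes instead of a Python loop.
        if len(boxes) == 0:
            return -1
        boxes = np.asarray(boxes, dtype=np.float64)
        x1 = np.maximum(boxes[:, 0], new_box[0])
        y1 = np.maximum(boxes[:, 1], new_box[1])
        x2 = np.minimum(boxes[:, 2], new_box[2])
        y2 = np.minimum(boxes[:, 3], new_box[3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_boxes = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        area_new = (new_box[2] - new_box[0]) * (new_box[3] - new_box[1])
        iou = inter / (area_boxes + area_new - inter)
        best = int(iou.argmax())
        return best if iou[best] > iou_thr else -1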

    opened by bartonp2 4
  • Strange result if boxes are intersected inside one model

    Example:

    from ensemble_boxes import *
    
    boxes_list = [[
        [0.1, 0.1, 0.2, 0.2],
        [0.1, 0.1, 0.2, 0.2],
    ], [
        [0.3, 0.3, 0.4, 0.4],
    ]]
    scores_list = [[1.0, 1.0], [0.5]]
    labels_list = [[0, 0], [0]]
    weights = [2, 1]
    
    iou_thr = 0.5
    skip_box_thr = 0.0001
    
    boxes, scores, labels = weighted_boxes_fusion(boxes_list, scores_list, labels_list, weights=weights, iou_thr=iou_thr,
                                                  skip_box_thr=skip_box_thr, allows_overflow=False
                                                  )
    print(scores)
    

    Actual result: [1.33333337 0.16666667]
    Expected result: scores are <= 1.

    opened by Sergey-Zlobin 2
  • Should the "break" change to "continue"?

    https://github.com/ZFTurbo/Weighted-Boxes-Fusion/blob/d48402e487dcdf9a5b4e519ed0ef254f233c3fd6/ensemble_boxes/ensemble_boxes_wbf.py#L35

    Hi, I do not see why this should break. If one prediction is bad, shouldn't we just ignore it instead of jumping out of the loop, unless the list is sorted?

    Thanks!

    opened by EvanAlbee 2
  • Incorrect conf_type='max' mode

    Example:

    from ensemble_boxes import *
    
    boxes_list = [[
        [0.1, 0.1, 0.2, 0.2],
    ], [
        [0.1, 0.1, 0.2, 0.2],
    ]]
    scores_list = [[1], [1]]
    labels_list = [[0], [0]]
    weights = [1, 3]
    
    iou_thr = 0.5
    skip_box_thr = 0.0001
    
    boxes, scores, labels = weighted_boxes_fusion(boxes_list, scores_list, labels_list, weights=weights, iou_thr=iou_thr,
                                                  skip_box_thr=skip_box_thr, allows_overflow=True, conf_type='max'
                                                  )
    print(scores)
    

    Actual result: [1.5]
    Expected result: [1]

    opened by Sergey-Zlobin 1
  • I've found typo in your title!

    Set of methods to ensemble boxes from object detection models, inlcuding implementation of "Weighted boxes fusion (WBF)" method.

    inlcuding -> including

    Thank you!

    opened by Bluespen 1
  • 2d bbox wbf compute result error

    from ensemble_boxes import *
    
    box_list = [[[0.3186783194541931, 0.7567034912109375, 0.3458101451396942, 0.8225753173828125],
                 [0.2893345057964325, 0.7414910888671875, 0.30395767092704773, 0.82741845703125],
                 [0.08234874159097672, 0.7192938232421875, 0.10110706090927124, 0.7978226318359375],
                 [0.03901633247733116, 0.6389908447265625, 0.08065500855445862, 0.794277587890625],
                 [0.10205534100532532, 0.7005220947265625, 0.3009164333343506, 0.8869298095703125],
                 [0.013241510838270187, 0.7410292358398437, 0.4873046875, 0.9892860717773437],
                 [0.20103125274181366, 0.6678619384765625, 0.2735580503940582, 0.8197706298828125],
                 [0.2807173728942871, 0.748069091796875, 0.34884029626846313, 0.8265133056640624]],
                [[499.8333263993263, 0.7574194946289062, 499.8582033813, 0.8213037719726562],
                 [499.80139984190464, 0.744067626953125, 499.8154309540987, 0.83922900390625],
                 [499.79503624141216, 0.7509530639648437, 499.86309084296227, 0.8289489135742187],
                 [499.52070143818855, 0.7752119750976563, 500.0, 0.9840308227539063],
                 [499.5126953125, 0.5068761901855469, 499.98771003447473, 0.74507958984375]],
                [[0.31953710317611694, 0.758130859375, 0.34634509682655334, 0.821462158203125],
                 [0.2890157401561737, 0.7444622192382813, 0.3044307231903076, 0.8266027221679687],
                 [0.008551719598472118, 0.4673032836914062, 0.36901113390922546, 0.7606300659179688],
                 [0.0, 0.4736612548828125, 0.4354751706123352, 0.738594970703125],
                 [0.004452536813914776, 0.7197838134765625, 0.4873046875, 1.0818572998046876],
                 [0.0, 0.7836503295898437, 0.4873046875, 0.9610354614257812],
                 [0.0, 0.4444543762207031, 0.4644381105899811, 0.7153197021484375],
                 [0.0, 0.4595667419433594, 0.45853325724601746, 0.745251220703125]],
                [[0.31933701038360596, 0.7575816040039063, 0.3459422290325165, 0.8217849731445312],
                 [0.2892712652683258, 0.742080322265625, 0.3039073348045349, 0.82669189453125],
                 [0.08082691580057144, 0.7194573364257812, 0.10114574432373047, 0.7929706420898438],
                 [0.014002567157149315, 0.73406103515625, 0.4873046875, 1.049856201171875],
                 [0.0058621931821107864, 0.485712646484375, 0.39766108989715576, 0.765383544921875],
                 [0.0058621931821107864, 0.485712646484375, 0.39766108989715576, 0.765383544921875],
                 [0.0010756775736808777, 0.7905551147460937, 0.4873046875, 0.9734806518554687]],
                [[0.3187095522880554, 0.7571224365234375, 0.34541282057762146, 0.82177685546875],
                 [0.2895776629447937, 0.7409693603515625, 0.3036201298236847, 0.8277974853515625],
                 [0.09533613175153732, 0.6792424926757813, 0.2974565327167511, 0.8962586059570312],
                 [0.08239483088254929, 0.720238525390625, 0.10080346465110779, 0.7970889892578125],
                 [0.03715477138757706, 0.6399656372070313, 0.08126747608184814, 0.7963726196289063],
                 [0.11268552392721176, 0.6827256469726563, 0.12404713779687881, 0.7254093627929687],
                 [0.13349150121212006, 0.6368695678710937, 0.14964361488819122, 0.6750675659179688],
                 [0.15557613968849182, 0.6638201904296875, 0.28306907415390015, 0.858079833984375],
                 [0.01832456886768341, 0.7675958251953126, 0.47474271059036255, 0.944915283203125],
                 [0.4468492865562439, 0.6279349975585937, 0.4620358347892761, 0.6685007934570313]],
                [[0.3194415271282196, 0.7588555908203125, 0.3447743058204651, 0.8150806884765625],
                 [0.0741792619228363, 0.6543595581054688, 0.3116285800933838, 0.9023289184570312],
                 [0.08234858512878418, 0.7222454223632813, 0.10103598982095718, 0.7950789184570313],
                 [0.13459675014019012, 0.6382381591796875, 0.1480436772108078, 0.6662626953125],
                 [0.28921717405319214, 0.738470703125, 0.3036784827709198, 0.8197808837890626],
                 [0.007326365448534489, 0.7826550903320313, 0.4597592055797577, 0.9532874145507813],
                 [0.0, 0.6689840087890625, 0.0, 0.70613916015625]]]
    
    score_list = [[0.83423, 0.79483235, 0.6388512, 0.18261029, 0.14935824, 0.088351235, 0.08091657, 0.07652432],
                  [0.879186, 0.60111153, 0.14815152, 0.14187415, 0.063758574],
                  [0.5681963, 0.4344975, 0.31340924, 0.21414337, 0.14648718, 0.07502777, 0.06623978, 0.06619557],
                  [0.7660579, 0.6813378, 0.24239238, 0.11751874, 0.11032326, 0.10366057, 0.074862376],
                  [0.8430377, 0.82931787, 0.7683554, 0.6823021, 0.28933194, 0.18235974, 0.17097037, 0.09869627, 0.08994506, 0.079594925],
                  [0.78075427, 0.7309819, 0.6295794, 0.48739368, 0.39831838, 0.073874675, 0.059466254]]
    
    label_list = [[6, 6, 6, 7, 7, 9, 7, 6],
                  [6, 6, 6, 9, 11],
                  [6, 6, 13, 11, 13, 9, 8, 0],
                  [6, 6, 6, 13, 11, 13, 9],
                  [6, 6, 7, 6, 7, 6, 6, 7, 9, 6],
                  [6, 7, 6, 6, 6, 9, 6]]
    
    weights = [1, 1, 1, 1, 1, 1]
    box_list, score_list, label_list = weighted_boxes_fusion(box_list, score_list, label_list,
                                                             weights=weights, iou_thr=0.5,
                                                             skip_box_thr=0.1)
    print(box_list)
    print(score_list)
    print(label_list)
    

    and the result is:

    [[0.31910411 0.75763094 0.34561539 0.82052845]
    [0.28932601 0.7415092  0.30388758 0.82627851]
    [0.08219483 0.72045314 0.10099649 0.79627055]
    [0.08656451 0.67013788 0.3040534  0.89810479]
    [0.13430972 0.63788271 0.14845915 0.66854924]
    [0.03787507 0.63958848 0.0810305  0.79556197]
    [0.00788325 0.47187883 0.37613192 0.76181149]
    [0.00199323 0.47775888 0.42261785 0.74770349]
    [0.00870361 0.72613913 0.48730466 1.        ]
    [0.11268552 0.68272565 0.12404714 0.72540936]]
    [0.63204601 0.52305063 0.36552083 0.2747826  0.10972734 0.07865704
    0.06951164 0.05407777 0.04400099 0.03039329]
    [ 6.  6.  6.  7.  6.  7. 13. 11. 13.  6.]
    

    The low-confidence boxes with scores lower than 0.1 (skip_box_thr) are not filtered out.

    opened by VariableXX 0
  • Single instance NMS

    I'm after NMS across classes, but by default most implementations only do it within each class. I can't work out whether across-class NMS is implemented by this tool / repo.

    See the yolo discussion: https://github.com/ultralytics/yolov5/issues/2162

    (I'm struggling to work with that implementation; it's not obvious what the format of the input predictions is.)
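
    As far as I can tell, NMS in this repo is run independently per label (see the nms_method code quoted in another comment below), so one way to approximate class-agnostic NMS is to collapse all labels into a single dummy class before the call; a rough sketch:

    from ensemble_boxes import nms

    # Class-agnostic NMS: give every box the same dummy label so overlapping
    # boxes suppress each other across classes. The original class information
    # is lost in the returned labels, so keep your own mapping if you need it.
    boxes_list = [[[0.10, 0.10, 0.40, 0.40], [0.11, 0.10, 0.41, 0.42]]]
    scores_list = [[0.9, 0.6]]
    labels_list = [[0, 0]]  # all boxes mapped to one dummy class

    boxes, scores, labels = nms(boxes_list, scores_list, labels_list, iou_thr=0.5)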

    opened by GeorgePearse 0
  • label list on SSD

    Hello, I'm building an ensemble object detection model with two SSD head detectors. In this implementation, the labels predicted by the model are stored together with the corresponding confidence scores and bbox coordinates.

    In SSD, though, the model doesn't produce the corresponding label during detection.

    Is there any way to make sense of this implementation when using SSD as the detector, or does anyone have any input on this matter? Looking forward to your thoughts!

    opened by ksmdnl 1
  • Building wheel for llvmlite ... error

    Hi there, maybe this is not an issue with the library, but it is an error that I faced when installing ensemble_boxes with the following command:

    pip install ensemble_boxes

    My environment is Jetson Xavier:

    • Ubuntu:18.04 LTS
    • ARMv8 Processor rev 0 (v8l) × 6
    • NVIDIA Tegra Xavier (nvgpu)/integrated
    • Conda 4.10.3

    To reproduce:

    1. Create a conda environment named pytorch1.
    2. Install pytorch via the following link: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048 (in my case I used Python 3.6 and pytorch v1.10.0).
    3. After successful installation, run 'pip install ensemble_boxes'.
    4. There is no error if I use Python 3.8 with the CPU version of pytorch.

    Error occurs:

    (pytorch1) [email protected]:~/Github/$ pip install ensemble_boxes Collecting ensemble_boxes Using cached ensemble_boxes-1.0.8-py3-none-any.whl (21 kB) Collecting numba Using cached numba-0.53.1-cp36-cp36m-linux_aarch64.whl Requirement already satisfied: numpy in /home/user/anaconda3/envs/pytorch1/lib/python3.6/site-packages (from ensemble_boxes) (1.19.5) Requirement already satisfied: pandas in /home/user/anaconda3/envs/pytorch1/lib/python3.6/site-packages (from ensemble_boxes) (1.1.5) Collecting llvmlite<0.37,>=0.36.0rc1 Using cached llvmlite-0.36.0.tar.gz (126 kB) Preparing metadata (setup.py) ... done Requirement already satisfied: setuptools in /home/user/anaconda3/envs/pytorch1/lib/python3.6/site-packages (from numba->ensemble_boxes) (59.5.0) Requirement already satisfied: python-dateutil>=2.7.3 in /home/user/anaconda3/envs/pytorch1/lib/python3.6/site-packages (from pandas->ensemble_boxes) (2.8.2) Requirement already satisfied: pytz>=2017.2 in /home/user/anaconda3/envs/pytorch1/lib/python3.6/site-packages (from pandas->ensemble_boxes) (2021.3) Requirement already satisfied: six>=1.5 in /home/user/anaconda3/envs/pytorch1/lib/python3.6/site-packages (from python-dateutil>=2.7.3->pandas->ensemble_boxes) (1.16.0) Building wheels for collected packages: llvmlite Building wheel for llvmlite (setup.py) ... error ERROR: Command errored out with exit status 1: command: /home/user/anaconda3/envs/pytorch1/bin/python3.6 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/setup.py'"'"'; file='"'"'/tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-lykc90s6 cwd: /tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/ Complete output (11 lines): running bdist_wheel /home/user/anaconda3/envs/pytorch1/bin/python3.6 /tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/ffi/build.py LLVM version... Traceback (most recent call last): File "/tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/ffi/build.py", line 220, in main() File "/tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/ffi/build.py", line 210, in main main_posix('linux', '.so') File "/tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/ffi/build.py", line 134, in main_posix raise RuntimeError(msg) from None RuntimeError: Could not find a llvm-config binary. There are a number of reasons this could occur, please see: https://llvmlite.readthedocs.io/en/latest/admin-guide/install.html#using-pip for help. error: command '/home/user/anaconda3/envs/pytorch1/bin/python3.6' failed with exit status 1

    ERROR: Failed building wheel for llvmlite Running setup.py clean for llvmlite Failed to build llvmlite Installing collected packages: llvmlite, numba, ensemble-boxes Running setup.py install for llvmlite ... error ERROR: Command errored out with exit status 1: command: /home/user/anaconda3/envs/pytorch1/bin/python3.6 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/setup.py'"'"'; file='"'"'/tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-hho726u9/install-record.txt --single-version-externally-managed --compile --install-headers /home/user/anaconda3/envs/pytorch1/include/python3.6m/llvmlite cwd: /tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/ Complete output (16 lines): running install /home/user/anaconda3/envs/pytorch1/lib/python3.6/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. setuptools.SetuptoolsDeprecationWarning, running build got version from file /tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/llvmlite/_version.py {'version': '0.36.0', 'full': 'e6bb8d137d922bec8beeb01a237254778759becd'} running build_ext /home/user/anaconda3/envs/pytorch1/bin/python3.6 /tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/ffi/build.py LLVM version... Traceback (most recent call last): File "/tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/ffi/build.py", line 220, in main() File "/tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/ffi/build.py", line 210, in main main_posix('linux', '.so') File "/tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/ffi/build.py", line 134, in main_posix raise RuntimeError(msg) from None RuntimeError: Could not find a llvm-config binary. There are a number of reasons this could occur, please see: https://llvmlite.readthedocs.io/en/latest/admin-guide/install.html#using-pip for help. error: command '/home/user/anaconda3/envs/pytorch1/bin/python3.6' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /home/user/anaconda3/envs/pytorch1/bin/python3.6 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/setup.py'"'"'; file='"'"'/tmp/pip-install-v9ddsi1d/llvmlite_c662f38a09a547d3aac9a4c30baebe97/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-hho726u9/install-record.txt --single-version-externally-managed --compile --install-headers /home/user/anaconda3/envs/pytorch1/include/python3.6m/llvmlite Check the logs for full command output.

    opened by jeffenhuang 1
  • Compatibility for Instance Segmentation

    I'm not actually suggesting doing WBF or NMS on masks, but I think those methods should be compatible with masks for instance segmentation. I haven't tried any methods in this repository for instance segmentation yet, but I think it makes sense that WBF and NMS could yield more refined masks.

    I'm currently doing this by adding a masks argument to the prepare_boxes and nms_method functions:

    def prepare_boxes(boxes, scores, labels, masks=None):
        result_boxes = boxes.copy()
    
        cond = (result_boxes < 0)
        cond_sum = cond.astype(np.int32).sum()
        if cond_sum > 0:
            print('Warning. Fixed {} boxes coordinates < 0'.format(cond_sum))
            result_boxes[cond] = 0
    
        cond = (result_boxes > 1)
        cond_sum = cond.astype(np.int32).sum()
        if cond_sum > 0:
            print('Warning. Fixed {} boxes coordinates > 1. Check that your boxes was normalized at [0, 1]'.format(cond_sum))
            result_boxes[cond] = 1
    
        boxes1 = result_boxes.copy()
        result_boxes[:, 0] = np.min(boxes1[:, [0, 2]], axis=1)
        result_boxes[:, 2] = np.max(boxes1[:, [0, 2]], axis=1)
        result_boxes[:, 1] = np.min(boxes1[:, [1, 3]], axis=1)
        result_boxes[:, 3] = np.max(boxes1[:, [1, 3]], axis=1)
    
        area = (result_boxes[:, 2] - result_boxes[:, 0]) * (result_boxes[:, 3] - result_boxes[:, 1])
        cond = (area == 0)
        cond_sum = cond.astype(np.int32).sum()
        if cond_sum > 0:
            print('Warning. Removed {} boxes with zero area!'.format(cond_sum))
            result_boxes = result_boxes[area > 0]
            scores = scores[area > 0]
            labels = labels[area > 0]
            if masks is not None:
                masks = masks[area > 0]
    
        return result_boxes, scores, labels, masks
    
    def nms_method(boxes, scores, labels, masks=None, method=3, iou_thr=0.5, sigma=0.5, thresh=0.001, weights=None):
        """
        :param boxes: list of boxes predictions from each model, each box is 4 numbers. 
        It has 3 dimensions (models_number, model_preds, 4)
        Order of boxes: x1, y1, x2, y2. We expect float normalized coordinates [0; 1] 
        :param scores: list of scores for each model 
        :param labels: list of labels for each model
        :param method: 1 - linear soft-NMS, 2 - gaussian soft-NMS, 3 - standard NMS
        :param iou_thr: IoU value for boxes to be a match 
        :param sigma: Sigma value for SoftNMS
        :param thresh: threshold for boxes to keep (important for SoftNMS)
        :param weights: list of weights for each model. Default: None, which means weight == 1 for each model
    
        :return: boxes: boxes coordinates (Order of boxes: x1, y1, x2, y2). 
        :return: scores: confidence scores
        :return: labels: boxes labels
        """
    
        # If weights are specified
        if weights is not None:
            if len(boxes) != len(weights):
                print('Incorrect number of weights: {}. Must be: {}. Skip it'.format(len(weights), len(boxes)))
            else:
                weights = np.array(weights)
                for i in range(len(weights)):
                    scores[i] = (np.array(scores[i]) * weights[i]) / weights.sum()
    
        # We concatenate everything
        boxes = np.concatenate(boxes)
        scores = np.concatenate(scores)
        labels = np.concatenate(labels)
        if masks is not None:
            masks = np.concatenate(masks)
    
        # Fix coordinates and removed zero area boxes
        boxes, scores, labels, masks = prepare_boxes(boxes, scores, labels, masks)
    
        # Run NMS independently for each label
        unique_labels = np.unique(labels)
        final_boxes = []
        final_scores = []
        final_labels = []
        if masks is not None:
            final_masks = []
    
        for l in unique_labels:
            condition = (labels == l)
            boxes_by_label = boxes[condition]
            scores_by_label = scores[condition]
            labels_by_label = np.array([l] * len(boxes_by_label))
            if masks is not None:
                masks_by_label = masks[condition]
    
            if method != 3:
                keep = cpu_soft_nms_float(boxes_by_label.copy(), scores_by_label.copy(), Nt=iou_thr, sigma=sigma, thresh=thresh, method=method)
            else:
                # Use faster function
                keep = nms_float_fast(boxes_by_label, scores_by_label, thresh=iou_thr)
    
            final_boxes.append(boxes_by_label[keep])
            final_scores.append(scores_by_label[keep])
            final_labels.append(labels_by_label[keep])
            if masks is not None:
                final_masks.append(masks_by_label[keep])
        final_boxes = np.concatenate(final_boxes)
        final_scores = np.concatenate(final_scores)
        final_labels = np.concatenate(final_labels)
        # Avoid a NameError when no masks were passed in
        final_masks = np.concatenate(final_masks) if masks is not None else None

        return final_boxes, final_scores, final_labels, final_masks
    
    def nms(boxes, scores, labels, masks=None, iou_thr=0.5, weights=None):
        """
        Short call for standard NMS 
        
        :param boxes: 
        :param scores: 
        :param labels: 
        :param iou_thr: 
        :param weights: 
        :return: 
        """
        return nms_method(boxes, scores, labels, masks, method=3, iou_thr=iou_thr, weights=weights)
    
    
    def soft_nms(boxes, scores, labels, masks=None, method=2, iou_thr=0.5, sigma=0.5, thresh=0.001, weights=None):
        """
        Short call for Soft-NMS
         
        :param boxes: 
        :param scores: 
        :param labels: 
        :param method: 
        :param iou_thr: 
        :param sigma: 
        :param thresh: 
        :param weights: 
        :return: 
        """
        return nms_method(boxes, scores, labels, masks, method=method, iou_thr=iou_thr, sigma=sigma, thresh=thresh, weights=weights)
    

    I know this quick workaround looks error-prone, but do you think this feature deserves a pull request? @ZFTurbo
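
    For reference, a hypothetical usage sketch for the modified functions above (it assumes they live alongside the library's internal helpers such as nms_float_fast, and that per-box masks are numpy arrays aligned with the boxes):

    import numpy as np

    # Two models, one detection each; masks are binary arrays aligned with the boxes
    boxes = [[[0.10, 0.10, 0.30, 0.30]], [[0.12, 0.11, 0.31, 0.29]]]
    scores = [[0.9], [0.8]]
    labels = [[0], [0]]
    masks = [np.zeros((1, 64, 64), dtype=np.uint8), np.ones((1, 64, 64), dtype=np.uint8)]

    final_boxes, final_scores, final_labels, final_masks = nms(
        boxes, scores, labels, masks=masks, iou_thr=0.5
    )
    print(final_boxes.shape, final_masks.shape)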

    opened by gunesevitan 1
Releases: v1.0.8