YOLOv7 - Framework Beyond Detection

Overview


This is the first and only (for now) YOLO family variant with transformers, and a more advanced YOLO that supports multi-tasking such as detection & segmentation at the same time!

🔥 🔥 🔥 Just another YOLO variant, implemented on top of detectron2. Note that YOLOv7 is not meant to be a successor of the YOLO family; 7 is just my magic and lucky number. In our humble opinion, a good open-source project must have these features:

  • It must be reproducible;
  • It must be simple and understandable;
  • It must be built with cutting-edge tools;
  • It must be well maintained and listen to the voice of the community.

However, we found that many open-source detection frameworks such as YOLOv5 and EfficientDet have their own weaknesses. For example, YOLOv5 is very reproducible but really over-engineered, with too much messy code. Even more surprising, there are at least 20+ different PyTorch re-implementations of YOLOv3-YOLOv4, and almost all of them are broken: you can neither train your own dataset with them nor reach an mAP comparable to the original paper. (That doesn't mean this work is guaranteed to be right either; use at your own risk.)

That's why we have this project! It is much simpler to experiment with different YOLO architectures when they are built upon detectron2, as in YOLOv7. Most importantly, more and more decent YOLO-series models are being merged into this repo, such as YOLOX (the most decent one in 2021). We also welcome any trick/experiment PRs on YOLOv7 to help us make it better and stronger! Please star and fork it right now!

The support matrix of YOLOv7 is:

  • YOLOv4 with CSP-Darknet53;
  • YOLOv7 arch with ResNet backbones;
  • YOLOv7 arch with ResNet-vd backbone (like PP-YOLO), deformable conv, Mish, etc.;
  • GridMask augmentation from PP-YOLO included;
  • Mosaic transform supported with a custom dataset mapper;
  • YOLOv7 arch with Swin Transformer support (higher accuracy but lower speed);
  • YOLOv7 arch with EfficientNet + BiFPN;
  • YOLOv5-style positive sample selection, new coordinate encoding style;
  • RandomColorDistortion, RandomExpand, RandomCrop, RandomFlip;
  • CIoU loss (DIoU, GIoU) and label smoothing (from YOLOv5 & YOLOv4);
  • YOLOF also included;
  • YOLOv7 Res2Net + FPN supported;
  • Pyramid Vision Transformer v2 (PVTv2) supported;
  • WBF (Weighted Box Fusion), which works better than NMS (see the sketch after this list), link;
  • YOLOX-like head design and anchor design, with training support;
  • YOLOX s, m, l backbones and PAFPN added; we have a new combination of YOLOX backbone and PAFPN;
  • YOLOv7 with Res2Net-v1d backbone; we found Res2Net-v1d has better accuracy than Darknet53;
  • Added PP-YOLOv2 PAN neck with SPP and DropBlock;
  • YOLOX arch added, so you can train YOLOX models (anchor-free YOLO) as well;
  • DETR: transformer-based detection model, with ONNX export and TensorRT acceleration supported;
  • AnchorDETR: faster-converging version of DETR, now supported!
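
As a hedged illustration of how WBF differs from NMS, here is a minimal sketch using the third-party ensemble-boxes package (the package and the exact way this repo wires WBF in are assumptions for illustration); boxes are in normalized [x1, y1, x2, y2] coordinates:

# pip install ensemble-boxes   (external package, not part of this repo)
from ensemble_boxes import weighted_boxes_fusion

# Two sets of predictions (e.g. from two models or two TTA passes).
boxes_list = [
    [[0.10, 0.10, 0.50, 0.50], [0.60, 0.60, 0.90, 0.90]],
    [[0.12, 0.11, 0.51, 0.49]],
]
scores_list = [[0.90, 0.75], [0.85]]
labels_list = [[0, 1], [0]]

# Unlike NMS, WBF averages overlapping boxes (weighted by score)
# instead of keeping only the highest-scoring one.
boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list, iou_thr=0.55, skip_box_thr=0.0
)
print(boxes, scores, labels)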

What's more, there are some awesome features inside the repo:

  • Almost all models can be exported to ONNX;
  • Supports TensorRT deployment for DETR and other transformer models;
  • It will integrate with wanwu, a torch-free deployment framework that runs fastest on your target platform.

💁‍♂️ Results

YOLOv7 Instance Face & Detection

🤔 Features

Some highlights of YOLOv7 are:

  • A simple and standard training framework for any detection && instance segmentation task, based on detectron2;
  • Supports DETR and many transformer-based detection frameworks out of the box;
  • Supports an easy-to-deploy pipeline through ONNX;
  • This is the only framework that supports YOLOv4 + instance segmentation in a single-stage style;
  • Easily plugs into transformer-based detectors;

We strongly recommend sending a PR if you have any further development on this project; the only reason for open-sourcing it is to use the power of the community to make it stronger. Anyone is welcome to contribute any feature!

🙌 Tasks Want Community to Finish

  • train a hand detection model, with YOLOX;
  • add more instance segmentation models into YOLOv7;

😎 Rules

There are some rules you must follow if you want to train on your own dataset:

  • Rule No.1: Always compute your own anchors on your dataset using tools/compute_anchors.py (see the sketch after this list); this applies to any other anchor-based detection method as well (EfficientDet etc.);
  • Rule No.2: Keep faith that your loss will go down eventually; if not, dig deeper to find out why (but do not post repeated issues, because I might not know the answer either).
  • Rule No.3: No one will tell you this, but it's real: do not change the backbone casually; all the parameters are coupled with your backbone, so don't assume it is as simple as it looks. Also, being a deep-learning engineer is not as easy as you think; the field is an ocean of knowledge, and yours is just a tiny drop of water...
  • Rule No.4: You must use pretrained weights for transformer-based backbones, otherwise your loss will blow up;
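
As a rough illustration of what anchor computation does, here is a minimal k-means sketch over ground-truth box widths/heights (an assumption for illustration only; the actual tools/compute_anchors.py in this repo may use a different distance metric and interface):

import numpy as np

def compute_anchors(wh, k=9, iters=100, seed=0):
    # Naive k-means over (width, height) pairs of ground-truth boxes.
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the nearest anchor (Euclidean distance in w/h space).
        dists = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # Move each anchor to the mean of its assigned boxes.
        for j in range(k):
            if (assign == j).any():
                centers[j] = wh[assign == j].mean(axis=0)
    return centers[np.argsort(centers.prod(axis=1))]

# wh would normally be an (N, 2) array collected from your dataset labels.
wh = np.array([[30, 60], [35, 70], [120, 90], [300, 250], [15, 20], [200, 180]], dtype=float)
print(compute_anchors(wh, k=3))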

Make sure you have read the rules before asking me any questions.

🆕 News!

  • 2021.09.16: First transformer-based DETR model added; will explore more DETR-series models;
  • 2021.08.02: YOLOX arch added, you can train YOLOX in this repo as well;
  • 2021.07.25: We found YOLOv7-Res2Net50 beats ResNet50 and Darknet53 at the same speed level! 5% AP boost on a custom dataset;
  • 2021.07.04: Added YOLOF, so we have anchor-free support as well; YOLOF achieves a better trade-off between speed and accuracy;
  • 2021.06.25: this project first started.
  • more

🧑‍🦯 Installation && Quick Start

😎 Train

Training is quite simple, the same as detectron2:

python train_net.py --config-file configs/coco/darknet53.yaml --num-gpus 8
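
If your data is a custom COCO-format dataset, here is a minimal sketch of registering it through detectron2's standard API before launching training (the dataset names and paths below are placeholders; point DATASETS.TRAIN / DATASETS.TEST in your YAML config at the registered names):

from detectron2.data.datasets import register_coco_instances

# Hypothetical names and paths; replace with your own annotation files and image folders.
register_coco_instances("my_dataset_train", {}, "datasets/my/annotations/train.json", "datasets/my/images/train")
register_coco_instances("my_dataset_val", {}, "datasets/my/annotations/val.json", "datasets/my/images/val")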

If you want to train YOLOX, you can use the config file configs/coco/yolox_s.yaml. All supported archs are:

  • YOLOX: anchor-free YOLO;
  • YOLOv7: traditional YOLO with some explorations, mainly focused on loss experiments;
  • YOLOv7P: traditional YOLO merged with decent arch parts from YOLOX;
  • YOLOMask: arch that does detection and segmentation at the same time (tbd);
  • YOLOInsSeg: instance segmentation based on YOLO detection (tbd);

🥰 Demo

Running a quick demo looks like this:

python3 demo.py --config-file configs/wearmask/darknet53.yaml --input ./datasets/wearmask/images/val2017 --opts MODEL.WEIGHTS output/model_0009999.pth

As an update based on detectron2's newly introduced LazyConfig system, run with a LazyConfig model using:

python3 demo_lazyconfig.py --config-file configs/new_baselines/panoptic_fpn_regnetx_0.4g.py --opts train.init_checkpoint=output/model_0004999.pth
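
To load a LazyConfig model programmatically instead of going through the demo script, here is a minimal sketch using detectron2's LazyConfig API (the config path and checkpoint are just the examples from the command above):

from detectron2.config import LazyConfig, instantiate
from detectron2.checkpoint import DetectionCheckpointer

cfg = LazyConfig.load("configs/new_baselines/panoptic_fpn_regnetx_0.4g.py")
model = instantiate(cfg.model)                        # build the model described by the config
DetectionCheckpointer(model).load("output/model_0004999.pth")
model.eval()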

Export ONNX && TensorRT && TVM

  1. DETR:
python export_onnx.py --config-file detr/config/file

This work has been done; the inference script is included inside tools (see also the onnxruntime sketch after this list).

  2. AnchorDETR:

AnchorDETR also supports training and exporting to ONNX.
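
To sanity-check an exported ONNX model, here is a minimal sketch with onnxruntime (the file name, input shape, and preprocessing are assumptions; inspect the exported graph, e.g. in Netron, for the real input names and sizes):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("detr.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name                      # discover the actual input name
dummy = np.random.rand(1, 3, 800, 800).astype(np.float32)   # placeholder input
outputs = sess.run(None, {input_name: dummy})
for out in outputs:
    print(out.shape)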

More Advanced YOLO

Here we show some highlights on multi-tasking:

Performance

Here is a dedicated performance comparison with other packages.

Some Tiny Object Datasets supported

Detection Results

Some Exp Visualizations

  1. GridMask

    Our GridMask augmentation also supports 2 modes (see the sketch after this list).

  2. Mosaic

    Our Mosaic supports any size and any number of images!

    New: we merged another Mosaic implementation from YOLOX; this version also does random perspective:
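
As a hedged illustration of the GridMask idea, here is a minimal NumPy sketch that zeroes out a regular grid of square regions (the repo's actual augmentation and its two modes may be parameterized differently):

import numpy as np

def gridmask(image, d=32, ratio=0.5, mode="drop"):
    # Build a grid with period d; each cell masks a (ratio * d)-sized square.
    h, w = image.shape[:2]
    mask = np.ones((h, w), dtype=image.dtype)
    cell = int(d * ratio)
    for y in range(0, h, d):
        for x in range(0, w, d):
            mask[y:y + cell, x:x + cell] = 0
    if mode == "keep":          # "keep" mode inverts the pattern
        mask = 1 - mask
    return image * mask[..., None] if image.ndim == 3 else image * mask

img = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
augmented = gridmask(img, d=48, ratio=0.4, mode="drop")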

License

Code is released under the GPL license. Please send a pull request to this source repo before you make your changes public or use them commercially. All rights reserved by Lucas Jin.

Comments
  • AssertionError: Invalid REFERENCE_WORLD_SIZE in config!

    File "//python3.7/site-packages/detectron2/engine/defaults.py", line 674, in auto_scale_workers ), "Invalid REFERENCE_WORLD_SIZE in config!" AssertionError: Invalid REFERENCE_WORLD_SIZE in config!

    opened by zkailinzhang 10
  • Error in demo.py (I think I know the solution)

    Hi, Thanks for this repo :-D

    When I run the demo.py I get the following error:

    python3 demo.py --output /home/ws/Desktop/ --config-file configs/coco-instance/yolomask.yaml --input /home/ws/data/dataset/nets_kinneret_only24_2/images/0_2.jpg --opts MODEL.WEIGHTS output/coco_yolomask/model_final.pth
    Install mish-cuda to speed up training and inference. More importantly, replace the naive Mish with MishCuda will give a ~1.5G memory saving during training.
    [05/08 10:45:50 detectron2]: Arguments: Namespace(confidence_threshold=0.21, config_file='configs/coco-instance/yolomask.yaml', input='/home/ws/data/dataset/nets_kinneret_only24_2/images/0_2.jpg', nms_threshold=0.6, opts=['MODEL.WEIGHTS', 'output/coco_yolomask/model_final.pth'], output='/home/ws/Desktop/', video_input=None, webcam=False)
    10:45:50 05.08 INFO yolomask.py:86]: YOLO.ANCHORS: [[142, 110], [192, 243], [459, 401], [36, 75], [76, 55], [72, 146], [12, 16], [19, 36], [40, 28]]
    10:45:50 05.08 INFO yolomask.py:90]: backboneshape: [64, 128, 256, 512], size_divisibility: 32
    [[142, 110], [192, 243], [459, 401], [36, 75], [76, 55], [72, 146], [12, 16], [19, 36], [40, 28]]
    /home/ws/.local/lib/python3.6/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2157.)
      return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
    [05/08 10:45:52 fvcore.common.checkpoint]: [Checkpointer] Loading from output/coco_yolomask/model_final.pth ...
    [05/08 10:45:53 d2.checkpoint.c2_model_loading]: Following weights matched with model:
    | Names in Model | Names in Checkpoint | Shapes |
    |:---------------|:--------------------|:-------|
    | ... full weight-matching table (backbone.*, neck.*, m.*, and orien_head.* parameters with their shapes) omitted for brevity ... |
    416 416 600
    confidence thresh:  0.21
    image after transform:  (416, 416, 3)
    cost: 0.02672123908996582, fps: 37.42341425983922
    Traceback (most recent call last):
      File "demo.py", line 211, in <module>
        res = vis_res_fast(res, img, class_names, colors, conf_thresh)
      File "demo.py", line 146, in vis_res_fast
        img, clss, bit_masks, force_colors=None, draw_contours=False
      File "/home/ws/.local/lib/python3.6/site-packages/alfred/vis/image/mask.py", line 302, in vis_bitmasks_with_classes
        txt = f'{class_names[classes[i]]}'
    TypeError: list indices must be integers or slices, not numpy.float32
    

    I think this happens because of these lines: scores = ins.scores.cpu().numpy(); clss = ins.pred_classes.cpu().numpy()

    This causes clss to be a numpy array and not a list of ints. Am I right? How does it work for you?

    Also, in line 157: if bboxes: — bboxes is not a bool. Maybe it should be: if bboxes is not None:
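
    A small self-contained sketch of the cast being suggested here (variable and class names are made up for illustration; whether demo.py adopts exactly this fix is an assumption):

    import numpy as np

    class_names = ["person", "car", "dog"]
    clss = np.array([0.0, 2.0], dtype=np.float32)   # what pred_classes.cpu().numpy() can look like

    # numpy.float32 values cannot index a Python list; cast to plain ints first.
    clss = clss.astype(int).tolist()
    labels = [class_names[i] for i in clss]
    print(labels)   # ['person', 'dog']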

    Thanks

    opened by sdimantsd 10
  • KeyError: 'dark3'

    I was training on a custom dataset and I got this error:

    [07/11 22:24:10 detectron2]: Running with full config was omitted.
    [07/11 22:24:10 detectron2]: Full config saved to ./output/config.yaml
    [07/11 22:24:10 d2.utils.env]: Using a generated random seed 10422596
    [07/11 22:24:10 d2.engine.defaults]: Auto-scaling the config to batch_size=2, learning_rate=0.0025, max_iter=720000, warmup=8000.
    Traceback (most recent call last):
      File "train_ahmed.py", line 61, in <module>
        args=(args,),
      File "/content/detectron2/detectron2/engine/launch.py", line 82, in launch
        main_func(*args)
      File "train_ahmed.py", line 48, in main
        trainer = Trainer(cfg)
      File "/content/detectron2/detectron2/engine/defaults.py", line 376, in __init__
        model = self.build_model(cfg)
      File "/content/yolov7/train_det.py", line 37, in build_model
        model = build_model(cfg)
      File "/content/detectron2/detectron2/modeling/meta_arch/build.py", line 22, in build_model
        model = META_ARCH_REGISTRY.get(meta_arch)(cfg)
      File "/content/yolov7/yolov7/modeling/meta_arch/yolov7.py", line 95, in __init__
        backbone_shape = [backbone_shape[i].channels for i in self.in_features]
      File "/content/yolov7/yolov7/modeling/meta_arch/yolov7.py", line 95, in <listcomp>
        backbone_shape = [backbone_shape[i].channels for i in self.in_features]
    KeyError: 'dark3'
    
    opened by totoadel 9
  • Inference on SparseInst onnxruntime gives bad output data

    Hello!

    Has anyone successfully exported the SparseInst model to ONNX and run inference on it using onnxruntime? I'm struggling to get reasonable output when running inference with the C++ onnxruntime. I run the same frame through the network pre-export and the output looks fine, but post-export the model outputs scores of shape (1, 50) that are all zeros except for a few values around 0.04. For comparison, the pre-export model output two person labels and one sports ball, all over 80% confidence.

    I exported the model using the provided export_onnx.py as per the readme instructions. The ONNX model looks fine, from what I can tell, when inspecting it in Netron.

    Just wondering if anyone has successfully used the exported model and could share some examples? =)

    opened by mattiasbax 9
  • SparseInst Model exportation to ONNX

    Thank you so much for your work @jinfagang. I have tested yolov7 and I realized that SparseInst models cannot be converted to ONNX. Is the export onnx code compatible with exporting SparseInst models?

    opened by ayoolaolafenwa 9
  • SparseInst expecting pred_boxes

    While trying to train a SparseInst on a custom dataset for instance segmentation, I came across the following error:

    AttributeError: Cannot find field 'pred_boxes' in the given Instances!

    The error is thrown from within the detectron2 library. When I print one Instance, there are indeed no pred_boxes in there, only pred_masks. But as far as I understand, SparseInst does not output bounding boxes; it predicts segmentation masks directly, as they highlight in their paper: https://arxiv.org/abs/2203.12827 Then I am wondering why the code is looking for predicted boxes in this case... o_0

    opened by TeodorChiaburu 8
  • SparseInst export to ONNX input data format

    Hi, I tried exporting the SparseInst (GIAM) weights in this repository to ONNX format using export.py with the following command (I assume I need to use this command? The documentation says to use export_onnx.py, but there is no export_onnx.py in the current branch of this repository).

    python export.py --config-file configs/coco/sparseinst/sparse_inst_r50_giam_aug.yaml --opts MODEL.WEIGHTS weights/base_giam.pth INPUT.MIN_SIZE_TEST 512

    This leads to the following issue:

      File ".../yolov7/yolov7/modeling/meta_arch/sparseinst.py", line 95, in <listcomp>
        images = [x["image"].to(self.device) for x in batched_inputs]
    IndexError: too many indices for tensor of dimension 3
    

    Any ideas on how to fix this? Thank you.

    opened by LukasMahieuArinti 6
  • sparseinst no results

    I tried the SparseInst demo like this:

    python demo.py --config-file configs/coco/sparseinst/sparse_inst_r50vd_giam_aug.yaml --video-input ~/Movies/Videos/86277963_nb2-1-80.flv -c 0.4 --opts MODEL.WEIGHTS weights/sparse_inst_r50vd_giam_aug_8bc5b3.pth
    

    but I get error

    Traceback (most recent call last):
      File "/home/dreamdeck/Documents/MJJ/code/Seg/yolov7-main/demo.py", line 230, in <module>
        res = vis_res_fast(res, frame, class_names, colors, conf_thresh)
      File "/home/dreamdeck/Documents/MJJ/code/Seg/yolov7-main/demo.py", line 155, in vis_res_fast
        img, clss, bit_masks, force_colors=None, draw_contours=False
      File "/home/dreamdeck/anaconda3/envs/detectron2/lib/python3.6/site-packages/alfred/vis/image/mask.py", line 285, in vis_bitmasks_with_classes
        thickness=-1, lineType=cv2.LINE_AA)
    cv2.error: OpenCV(4.5.5) /io/opencv/modules/imgproc/src/drawing.cpp:2599: error: (-215:Assertion failed) reader.ptr != NULL in function 'cvDrawContours'
    

    I debugged and found that there are no masks when running predictions = self.model([inputs]). All values in pred_masks are False.

    Do I need to modify the config file or download other files? (I have downloaded sparse_inst_r50vd_giam_aug_8bc5b3.pth.) What are the full steps to run the demo?

    opened by MolianWH 6
  • SparseInst onnx2trt error

    @jinfagang, hello, thank you for your work! When I convert the SparseInst ONNX model to a TensorRT model using trtexec, the error is: In node -1 (importRange): UNSUPPORTED_NODE: Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"

    TensorRT version: 7.2.3; ONNX version: v7; PyTorch version: 1.10

    opened by taxuezcy 6
  • Log train metrics into Weights & Biases for Object Detection task

    This PR adds the ability to log training metrics for object detection into Weights & Biases. Here's an example dashboard of fine-tuning the faster_rcnn_R_101_FPN_3x detection model from detectron2 model zoo.

    opened by parambharat 5
  • CUDA out of memory during evaluation

    Hello, thank you very much for providing the code. I ran into two problems while using it and hope you can help me with them.

    1. When I run evaluation on COCO with yolomask_8gpu.yaml, GPU memory grows by more than 300 MB after the 27th image passes through the model, grows again at the 47th image, and then the run fails with CUDA out of memory.

    2. coco-instance contains two configs. In yolomask.yaml IMS_PER_BATCH is 3, but the default REFERENCE_WORLD_SIZE is 8, so at runtime an error says IMS_PER_BATCH is invalid; the source code requires IMS_PER_BATCH % REFERENCE_WORLD_SIZE == 0.

    opened by erpingzi 5
  • unable to load DarkNet53 backbone for yolov7

    Hi, @jinfagang

    According to darknet53.yaml, I downloaded yolov3.pt as the backbone for initializing the yolov7.

    However, I ran into the following problem; do you have any suggestions for providing correct parameters for yolov7?

    RuntimeError: Error(s) in loading state_dict for Darknet: Missing key(s) in state_dict: "stem.conv.weight", "stem.bn.weight", "stem.bn.bias", "stem.bn.running_mean", "stem.bn.running_var", "dark1.0.conv.weight", "dark1.0.bn.weight", "dark1.0.bn.bias", "dark1.0.bn.running_mean", "dark1.0.bn.running_var", "dark1.1.layer1.conv.weight", "dark1.1.layer1.bn.weight", "dark1.1.layer1.bn.bias", "dark1.1.layer1.bn.running_mean", "dark1.1.layer1.bn.running_var", "dark1.1.layer2.conv.weight", "dark1.1.layer2.bn.weight", ....... .......

    opened by GrassBro 0
  • Backbone of yolo v7

    Hello, I appreciate your novel work. I have a question about your implementation. On the official yolo v7 GitHub there is no public description of the backbone networks used in the yolo v7 variants, so I can't find the exact name of the yolov7 backbone or head network. Could you give me the exact combination for each yolov7 variant in your implementation? For example, darknet53 + yolofpn + yoloxhead = yolo_v7_xxx.

    Best regards.

    opened by chey0313 1
  • ONNX export failed: Couldn't export Python operator _DeformConv

    Hi,

    • in docker container from pytorch/pytorch:1.11.0-cuda11.3-cudnn8-devel
    • torch 1.11.0
    • torchvision 0.12.0
    • latest detectron2 main branch
    • python3 export.py failed while exporting model sparse_inst_r50vd_dcn_giam_aug.yaml

    Error log:

    Traceback (most recent call last):
      File "export.py", line 285, in <module>
        torch.onnx.export(
      File "/opt/conda/lib/python3.8/site-packages/torch/onnx/__init__.py", line 305, in export
        return utils.export(model, args, f, export_params, verbose, training,
      File "/opt/conda/lib/python3.8/site-packages/torch/onnx/utils.py", line 118, in export
        _export(model, args, f, export_params, verbose, training, input_names, output_names,
      File "/opt/conda/lib/python3.8/site-packages/torch/onnx/utils.py", line 738, in _export
        proto, export_map, val_use_external_data_format = graph._export_onnx(
    RuntimeError: ONNX export failed: Couldn't export Python operator _DeformConv
    
    
    Defined at:
    /yolov7_d2/detectron2/detectron2/layers/deform_conv.py(394): forward
    /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1098): _slow_forward
    /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1110): _call_impl
    /yolov7_d2/yolov7/modeling/backbone/resnetvd.py(105): forward
    /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1098): _slow_forward
    /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1110): _call_impl
    /opt/conda/lib/python3.8/site-packages/torch/nn/modules/container.py(141): forward
    /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1098): _slow_forward
    /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1110): _call_impl
    /yolov7_d2/yolov7/modeling/backbone/resnetvd.py(509): forward
    /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1098): _slow_forward
    /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1110): _call_impl
    /yolov7_d2/yolov7/modeling/meta_arch/sparseinst.py(147): forward
    /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1098): _slow_forward
    /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1110): _call_impl
    /opt/conda/lib/python3.8/site-packages/torch/jit/_trace.py(118): wrapper
    /opt/conda/lib/python3.8/site-packages/torch/jit/_trace.py(127): forward
    /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1110): _call_impl
    /opt/conda/lib/python3.8/site-packages/torch/jit/_trace.py(1166): _get_trace_graph
    /opt/conda/lib/python3.8/site-packages/torch/onnx/utils.py(391): _trace_and_get_graph_from_model
    /opt/conda/lib/python3.8/site-packages/torch/onnx/utils.py(440): _create_jit_graph
    /opt/conda/lib/python3.8/site-packages/torch/onnx/utils.py(499): _model_to_graph
    /opt/conda/lib/python3.8/site-packages/torch/onnx/utils.py(719): _export
    /opt/conda/lib/python3.8/site-packages/torch/onnx/utils.py(118): export
    /opt/conda/lib/python3.8/site-packages/torch/onnx/__init__.py(305): export
    export.py(285): <module>
    
    opened by XYudong 3
Releases(v1.0)
  • v1.0(Jun 20, 2022)

  • v0.0.3(May 10, 2022)

    Since YOLOv7 is not just YOLO, but aims to be a mature modeling tool based on detectron2, I will keep adding more models to it. It has also received a lot of PRs from the community. Let's build YOLOv7 together and run a lot of experiments on it.

    The current version is a beta release, but it is getting close to a stable release; we keep working on it.

    • YOLOX-Pose model added; you should now be able to train YOLOX with a pose head;
    • SOLOv2 training support; you can now train SOLOv2 with YOLOv7;
    • SparseInst training support; you can now train SparseInst, and we also support ONNX export;
    • Some bug fixes in SparseInst;

    Feature you will see in the next release:

    • Int8 quantization on SparseInst, CPU real-time instance segmentation, straight from yolov7!
    • TensorRT on SOLOv2 and SparseInst;
    • Pretrained model of YOLOX-Pose.

    Just star and fork it! Join the community!

    Source code(tar.gz)
    Source code(zip)
  • v0.0.2(Mar 25, 2022)

    updates on:

    • remove the dependency on mobilecv, which is now optional;
    • remove the nbnb dependency, now optional;
    • add setup.py for installation, so the package can be used anywhere, in any other project.
    Source code(tar.gz)
    Source code(zip)