Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers

Overview

This repository contains the implementation of Panoptic SegFormer, a Transformer-based framework for panoptic segmentation, described in the paper "Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers".



Results

Results on COCO val

Backbone                  Method              Lr Schd  PQ    Config  Download
R-50                      Panoptic-SegFormer  1x       48.0  config  model
R-50                      Panoptic-SegFormer  2x       49.6  config  model
R-101                     Panoptic-SegFormer  2x       50.6  config  model
PVTv2-B5 (much lighter)   Panoptic-SegFormer  2x       55.6  config  model
Swin-L (window size 7)    Panoptic-SegFormer  2x       55.8  config  model

Install

Prerequisites

  • Linux
  • Python 3.6+
  • PyTorch 1.5+
  • torchvision
  • CUDA 9.2+ (If you build PyTorch from source, CUDA 9.0 is also compatible)
  • GCC 5+
  • mmcv-full==1.3.4
  • mmdet==2.12.0 (higher versions may not work)
  • timm==0.4.5
  • einops==0.3.0
  • Pillow==8.0.1
  • opencv-python==4.5.2

Note: PyTorch 1.8 has a bug in its adamw.py that is fixed in PyTorch 1.9; you can easily fix it yourself by comparing the two versions and applying the difference.

install Panoptic SegFormer

python setup.py install 
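
After installation, a quick sanity check of the installed versions can catch environment mismatches early. The helper below is only a sketch (it is not part of the repository); it prints the installed versions next to the ones pinned in the Prerequisites list above.

# env_check.py -- a small sketch, not part of the repository: print installed
# dependency versions next to the versions pinned in the Prerequisites list.
import importlib

expected = {
    'torch': '1.5+',      # CUDA build matching your local CUDA toolkit
    'mmcv': '1.3.4',      # installed as mmcv-full
    'mmdet': '2.12.0',
    'timm': '0.4.5',
    'einops': '0.3.0',
}

for name, pinned in expected.items():
    module = importlib.import_module(name)
    installed = getattr(module, '__version__', 'unknown')
    print(f'{name:8s} installed={installed:10s} pinned={pinned}')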

Datasets

When I began this project, mmdet did not officially support panoptic segmentation, so I converted the dataset from panoptic segmentation format to instance segmentation format for convenience.

1. prepare data (COCO)

cd Panoptic-SegFormer
mkdir datasets
cd datasets
ln -s path_to_coco coco
mkdir annotations/
cd annotations
wget http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip
unzip panoptic_annotations_trainval2017.zip

Then the directory structure should be the following:

Panoptic-SegFormer
├── datasets
│   ├── annotations/
│   │   ├── panoptic_train2017/
│   │   ├── panoptic_train2017.json
│   │   ├── panoptic_val2017/
│   │   └── panoptic_val2017.json
│   └── coco/ 
│
├── config
├── checkpoints
├── easymd
...

2. convert panoptic format to detection format

cd Panoptic-SegFormer
./tools/convert_panoptic_coco.sh coco

Then the directory structure should be the following:

Panoptic-SegFormer
├── datasets
│   ├── annotations/
│   │   ├── panoptic_train2017/
│   │   ├── panoptic_train2017_detection_format.json
│   │   ├── panoptic_train2017.json
│   │   ├── panoptic_val2017/
│   │   ├── panoptic_val2017_detection_format.json
│   │   └── panoptic_val2017.json
│   └── coco/ 
│
├── config
├── checkpoints
├── easymd
...
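
The conversion performed by the script in step 2 is conceptually similar to the sketch below. This is not the repository's actual converter; it assumes the panopticapi and pycocotools packages and the standard COCO panoptic annotation layout.

# Conceptual sketch of the panoptic -> detection-format conversion (step 2).
# Not the repository's converter; assumes panopticapi and pycocotools are installed.
import json
import numpy as np
from PIL import Image
from panopticapi.utils import rgb2id
from pycocotools import mask as mask_utils

def panoptic_to_detection(panoptic_json, panoptic_png_dir, out_json):
    with open(panoptic_json) as f:
        pan = json.load(f)

    annotations = []
    next_id = 1
    for ann in pan['annotations']:
        # Each panoptic annotation stores every segment of one image in a single PNG.
        png = np.array(Image.open(f"{panoptic_png_dir}/{ann['file_name']}"), dtype=np.uint32)
        seg_map = rgb2id(png)  # encoded RGB colors -> integer segment ids
        for seg in ann['segments_info']:
            binary = (seg_map == seg['id']).astype(np.uint8)
            rle = mask_utils.encode(np.asfortranarray(binary))
            rle['counts'] = rle['counts'].decode()  # bytes -> str so it is JSON-serializable
            annotations.append({
                'id': next_id,
                'image_id': ann['image_id'],
                'category_id': seg['category_id'],
                'segmentation': rle,
                'bbox': seg['bbox'],
                'area': seg['area'],
                'iscrowd': seg['iscrowd'],
            })
            next_id += 1

    with open(out_json, 'w') as f:
        json.dump({'images': pan['images'],
                   'annotations': annotations,
                   'categories': pan['categories']}, f)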

Run (panoptic segmentation)

train

Single machine with 8 GPUs.

./tools/dist_train.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py 8

test

./tools/dist_test.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py path/to/model.pth 8
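
For a quick qualitative check on a single image (see also the related questions in the comments below), the standard MMDetection inference API can be used. The snippet below is a minimal sketch: the checkpoint filename is a placeholder for one of the downloaded models, and the output follows this repo's detection-format results.

# Minimal single-image inference sketch using the standard MMDetection API.
# The checkpoint path is a placeholder; point it at one of the downloaded models.
from mmdet.apis import init_detector, inference_detector
import easymd  # noqa: F401 -- importing easymd registers the Panoptic SegFormer modules

config = './configs/panformer/panformer_r50_24e_coco_panoptic.py'
checkpoint = './checkpoints/panoptic_segformer_r50_1x.pth'  # placeholder filename

model = init_detector(config, checkpoint=checkpoint)
results = inference_detector(model, ['demo.jpg'])
# 'results' is in detection format (e.g. the comments below access results['segm']).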

Citing

If you use Panoptic SegFormer in your research, please use the following BibTeX entry.

@article{li2021panoptic,
  title={Panoptic SegFormer},
  author={Li, Zhiqi and Wang, Wenhai and Xie, Enze and Yu, Zhiding and Anandkumar, Anima and Alvarez, Jose M and Lu, Tong and Luo, Ping},
  journal={arXiv},
  year={2021}
}

Acknowledgement

Mainly based on Deformable DETR from MMDetection.

Many thanks to other open-source works: timm, Panoptic FCN, MaskFormer, QueryInst.

Comments
  • How demo one picture result ?

    Dear friend, thank you for your good work. We do not want to download the COCO dataset; we just want to give one picture, segment it, and show its result. How can we do that? Best regards,

    opened by delldu 3
  • what's pvt_v2_ap in code?

    I found there are many names that are hard to understand. For example, what does pvt_v2_ap stand for? And what does single_stage_w_mask stand for?

    [image]

    and those file differences?

    opened by jinfagang 2
  • how to visualize demo image?

    Dear friend, how can I visualize the segmentation result of custom images? I ran inference.py and didn't get a good result, like this: [image]

    I think there are some faults in my code.

    Here is my code:

    from mmcv.runner import checkpoint
    from mmdet.apis.inference import init_detector,LoadImage, inference_detector
    import easymd
    import cv2
    import random
    import colorsys
    import numpy as np
    
    def random_colors(N, bright=True):
        brightness = 1.0 if bright else 0.7
        hsv = [(i / float(N), 1, brightness) for i in range(N)]
        colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv))
        random.shuffle(colors)
        return colors
    
    def apply_mask(image, mask, color, alpha=0.5):
        for c in range(3):
            image[:, :, c] = np.where(mask == 0,
                                      image[:, :, c],
                                      image[:, :, c] *
                                      (1 - alpha) + alpha * color[c] * 255)
        return image
    
    config = './configs/panformer/panformer_pvtb5_24e_coco_panoptic.py'
    #checkpoints = './checkpoints/pseg_r101_r50_latest.pth'
    checkpoints = "./checkpoints/panoptic_segformer_pvtv2b5_2x.pth"
    img_path = "img_path "
    mask_save_path = "save_path"
    
    colors = random_colors(80)
    
    model = init_detector(config,checkpoint=checkpoints)
    
    results = inference_detector(model, [img_path])
    
    img = cv2.imread(img_path)
    
    seg = results['segm'][0]
    N = len(seg)
    
    masked_image = img.copy()
    for i in range(N):
        color = colors[i]
        masks = np.sum(seg[i], axis=0)
        masked_image = apply_mask(masked_image, masks, color)
        # for mask in seg[i]:
        #     masked_image = apply_mask(masked_image, mask, color)
    
    # cv2.imshow("a", masked_image)
    
    opened by garriton 0
  • Location Decoder loss

    https://github.com/zhiqi-li/Panoptic-SegFormer/blob/e604ef810eaf5101106d221db4b6970c2daca5c9/easymd/models/panformer/panformer_head.py#L360-L364

    Why does the location decoder only compute the losses of the first L-1 layers and not all L layers?

    opened by hust-nj 0
  • Instruction for single GPU run

    Hi, thanks for sharing your work. I was trying to run it on a single GPU. Would you please add some instructions or scripts to run it on a single GPU? That would be a great help.

    Kind regards, Abdullah

    opened by nazib 1
  • Impossible to debug, single_gpu code paths are broken

    It seems that multi-GPU training and evaluation work great; however, when trying to debug you would opt for using a single GPU.
    In that case the code breaks in several places during evaluation of the validation set.
    Any chance for a hotfix? :)

    To reproduce, try to run the code from PyCharm in debug mode while there's only one GPU available.

    opened by aviadmx 1
  • Why instance annotations are required along panoptic ones?

    The model solves the panoptic segmentation task, so why does the validation dataset use the instance segmentation annotations?

    data = dict(
        samples_per_gpu=2,
        workers_per_gpu=2,
        train=dict(
            type=dataset_type,
            ann_file= './datasets/annotations/panoptic_train2017_detection_format.json',
            img_prefix=data_root + 'train2017/',
            pipeline=train_pipeline),
        val=dict(
            segmentations_folder='./seg',
            gt_json = './datasets/annotations/panoptic_val2017.json',
            gt_folder = './datasets/annotations/panoptic_val2017',
            type=dataset_type,
            ann_file=data_root + 'annotations/instances_val2017.json', # Why?
            img_prefix=data_root + 'val2017/',
            pipeline=test_pipeline),
        test=dict(
            segmentations_folder='./seg',
            gt_json = './datasets/annotations/panoptic_val2017.json',
            gt_folder = './datasets/annotations/panoptic_val2017',
            type=dataset_type,
            #ann_file= './datasets/coco/annotations/image_info_test-dev2017.json',
            ann_file=data_root + 'annotations/instances_val2017.json', # Why?
            #img_prefix=data_root + '/test2017/',
            img_prefix=data_root + 'val2017/',
            pipeline=test_pipeline)
            )
    

    We eventually use the instances_val2017.json file instead of panoptic_val2017.json

    opened by aviadmx 3
  • Loading checkpoint

    When loading the Swin-L checkpoint by adding a load_from line to the config configs/panformer/panformer_swinl_24e_coco_panoptic.py as follows:

    load_from='./pretrained/panoptic_segformer_swinl_2x.pth'
    

    The loading fails with an error about mismatched keys:

    unexpected key in source state_dict: bbox_head.cls_branches2.0.weight, bbox_head.cls_branches2.0.bias, bbox_head.cls_branches2.1.weight, bbox_head.cls_branches2.1.bias, bbox_head.cls_branches2.2.weight, bbox_head.cls_branches2.2.bias, bbox_head.cls_branches2.3.weight, bbox_head.cls_branches2.3.bias, bbox_head.mask_head.blocks.0.head_norm1.weight, bbox_head.mask_head.blocks.0.head_norm1.bias, bbox_head.mask_head.blocks.0.attn.q.weight, bbox_head.mask_head.blocks.0.attn.q.bias, bbox_head.mask_head.blocks.0.attn.k.weight, bbox_head.mask_head.blocks.0.attn.k.bias, bbox_head.mask_head.blocks.0.attn.v.weight, bbox_head.mask_head.blocks.0.attn.v.bias, bbox_head.mask_head.blocks.0.attn.proj.weight, bbox_head.mask_head.blocks.0.attn.proj.bias, bbox_head.mask_head.blocks.0.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.0.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.0.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.0.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.0.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.0.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.0.attn.linear.0.weight, bbox_head.mask_head.blocks.0.attn.linear.0.bias, bbox_head.mask_head.blocks.0.head_norm2.weight, bbox_head.mask_head.blocks.0.head_norm2.bias, bbox_head.mask_head.blocks.0.mlp.fc1.weight, bbox_head.mask_head.blocks.0.mlp.fc1.bias, bbox_head.mask_head.blocks.0.mlp.fc2.weight, bbox_head.mask_head.blocks.0.mlp.fc2.bias, bbox_head.mask_head.blocks.1.head_norm1.weight, bbox_head.mask_head.blocks.1.head_norm1.bias, bbox_head.mask_head.blocks.1.attn.q.weight, bbox_head.mask_head.blocks.1.attn.q.bias, bbox_head.mask_head.blocks.1.attn.k.weight, bbox_head.mask_head.blocks.1.attn.k.bias, bbox_head.mask_head.blocks.1.attn.v.weight, bbox_head.mask_head.blocks.1.attn.v.bias, bbox_head.mask_head.blocks.1.attn.proj.weight, bbox_head.mask_head.blocks.1.attn.proj.bias, bbox_head.mask_head.blocks.1.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.1.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.1.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.1.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.1.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.1.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.1.attn.linear.0.weight, bbox_head.mask_head.blocks.1.attn.linear.0.bias, bbox_head.mask_head.blocks.1.head_norm2.weight, bbox_head.mask_head.blocks.1.head_norm2.bias, bbox_head.mask_head.blocks.1.mlp.fc1.weight, bbox_head.mask_head.blocks.1.mlp.fc1.bias, bbox_head.mask_head.blocks.1.mlp.fc2.weight, bbox_head.mask_head.blocks.1.mlp.fc2.bias, bbox_head.mask_head.blocks.2.head_norm1.weight, bbox_head.mask_head.blocks.2.head_norm1.bias, bbox_head.mask_head.blocks.2.attn.q.weight, bbox_head.mask_head.blocks.2.attn.q.bias, bbox_head.mask_head.blocks.2.attn.k.weight, bbox_head.mask_head.blocks.2.attn.k.bias, bbox_head.mask_head.blocks.2.attn.v.weight, bbox_head.mask_head.blocks.2.attn.v.bias, bbox_head.mask_head.blocks.2.attn.proj.weight, bbox_head.mask_head.blocks.2.attn.proj.bias, bbox_head.mask_head.blocks.2.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.2.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.2.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.2.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.2.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.2.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.2.attn.linear.0.weight, bbox_head.mask_head.blocks.2.attn.linear.0.bias, bbox_head.mask_head.blocks.2.head_norm2.weight, bbox_head.mask_head.blocks.2.head_norm2.bias, 
bbox_head.mask_head.blocks.2.mlp.fc1.weight, bbox_head.mask_head.blocks.2.mlp.fc1.bias, bbox_head.mask_head.blocks.2.mlp.fc2.weight, bbox_head.mask_head.blocks.2.mlp.fc2.bias, bbox_head.mask_head.blocks.3.head_norm1.weight, bbox_head.mask_head.blocks.3.head_norm1.bias, bbox_head.mask_head.blocks.3.attn.q.weight, bbox_head.mask_head.blocks.3.attn.q.bias, bbox_head.mask_head.blocks.3.attn.k.weight, bbox_head.mask_head.blocks.3.attn.k.bias, bbox_head.mask_head.blocks.3.attn.v.weight, bbox_head.mask_head.blocks.3.attn.v.bias, bbox_head.mask_head.blocks.3.attn.proj.weight, bbox_head.mask_head.blocks.3.attn.proj.bias, bbox_head.mask_head.blocks.3.attn.linear_l1.0.weight, bbox_head.mask_head.blocks.3.attn.linear_l1.0.bias, bbox_head.mask_head.blocks.3.attn.linear_l2.0.weight, bbox_head.mask_head.blocks.3.attn.linear_l2.0.bias, bbox_head.mask_head.blocks.3.attn.linear_l3.0.weight, bbox_head.mask_head.blocks.3.attn.linear_l3.0.bias, bbox_head.mask_head.blocks.3.attn.linear.0.weight, bbox_head.mask_head.blocks.3.attn.linear.0.bias, bbox_head.mask_head.blocks.3.head_norm2.weight, bbox_head.mask_head.blocks.3.head_norm2.bias, bbox_head.mask_head.blocks.3.mlp.fc1.weight, bbox_head.mask_head.blocks.3.mlp.fc1.bias, bbox_head.mask_head.blocks.3.mlp.fc2.weight, bbox_head.mask_head.blocks.3.mlp.fc2.bias, bbox_head.mask_head.attnen.q.weight, bbox_head.mask_head.attnen.q.bias, bbox_head.mask_head.attnen.k.weight, bbox_head.mask_head.attnen.k.bias, bbox_head.mask_head.attnen.linear_l1.0.weight, bbox_head.mask_head.attnen.linear_l1.0.bias, bbox_head.mask_head.attnen.linear_l2.0.weight, bbox_head.mask_head.attnen.linear_l2.0.bias, bbox_head.mask_head.attnen.linear_l3.0.weight, bbox_head.mask_head.attnen.linear_l3.0.bias, bbox_head.mask_head.attnen.linear.0.weight, bbox_head.mask_head.attnen.linear.0.bias, bbox_head.mask_head2.blocks.0.head_norm1.weight, bbox_head.mask_head2.blocks.0.head_norm1.bias, bbox_head.mask_head2.blocks.0.attn.q.weight, bbox_head.mask_head2.blocks.0.attn.q.bias, bbox_head.mask_head2.blocks.0.attn.k.weight, bbox_head.mask_head2.blocks.0.attn.k.bias, bbox_head.mask_head2.blocks.0.attn.v.weight, bbox_head.mask_head2.blocks.0.attn.v.bias, bbox_head.mask_head2.blocks.0.attn.proj.weight, bbox_head.mask_head2.blocks.0.attn.proj.bias, bbox_head.mask_head2.blocks.0.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.0.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.0.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.0.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.0.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.0.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.0.attn.linear.0.weight, bbox_head.mask_head2.blocks.0.attn.linear.0.bias, bbox_head.mask_head2.blocks.0.head_norm2.weight, bbox_head.mask_head2.blocks.0.head_norm2.bias, bbox_head.mask_head2.blocks.0.mlp.fc1.weight, bbox_head.mask_head2.blocks.0.mlp.fc1.bias, bbox_head.mask_head2.blocks.0.mlp.fc2.weight, bbox_head.mask_head2.blocks.0.mlp.fc2.bias, bbox_head.mask_head2.blocks.0.self_attention.qkv.weight, bbox_head.mask_head2.blocks.0.self_attention.qkv.bias, bbox_head.mask_head2.blocks.0.self_attention.proj.weight, bbox_head.mask_head2.blocks.0.self_attention.proj.bias, bbox_head.mask_head2.blocks.0.norm3.weight, bbox_head.mask_head2.blocks.0.norm3.bias, bbox_head.mask_head2.blocks.1.head_norm1.weight, bbox_head.mask_head2.blocks.1.head_norm1.bias, bbox_head.mask_head2.blocks.1.attn.q.weight, bbox_head.mask_head2.blocks.1.attn.q.bias, bbox_head.mask_head2.blocks.1.attn.k.weight, bbox_head.mask_head2.blocks.1.attn.k.bias, 
bbox_head.mask_head2.blocks.1.attn.v.weight, bbox_head.mask_head2.blocks.1.attn.v.bias, bbox_head.mask_head2.blocks.1.attn.proj.weight, bbox_head.mask_head2.blocks.1.attn.proj.bias, bbox_head.mask_head2.blocks.1.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.1.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.1.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.1.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.1.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.1.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.1.attn.linear.0.weight, bbox_head.mask_head2.blocks.1.attn.linear.0.bias, bbox_head.mask_head2.blocks.1.head_norm2.weight, bbox_head.mask_head2.blocks.1.head_norm2.bias, bbox_head.mask_head2.blocks.1.mlp.fc1.weight, bbox_head.mask_head2.blocks.1.mlp.fc1.bias, bbox_head.mask_head2.blocks.1.mlp.fc2.weight, bbox_head.mask_head2.blocks.1.mlp.fc2.bias, bbox_head.mask_head2.blocks.1.self_attention.qkv.weight, bbox_head.mask_head2.blocks.1.self_attention.qkv.bias, bbox_head.mask_head2.blocks.1.self_attention.proj.weight, bbox_head.mask_head2.blocks.1.self_attention.proj.bias, bbox_head.mask_head2.blocks.1.norm3.weight, bbox_head.mask_head2.blocks.1.norm3.bias, bbox_head.mask_head2.blocks.2.head_norm1.weight, bbox_head.mask_head2.blocks.2.head_norm1.bias, bbox_head.mask_head2.blocks.2.attn.q.weight, bbox_head.mask_head2.blocks.2.attn.q.bias, bbox_head.mask_head2.blocks.2.attn.k.weight, bbox_head.mask_head2.blocks.2.attn.k.bias, bbox_head.mask_head2.blocks.2.attn.v.weight, bbox_head.mask_head2.blocks.2.attn.v.bias, bbox_head.mask_head2.blocks.2.attn.proj.weight, bbox_head.mask_head2.blocks.2.attn.proj.bias, bbox_head.mask_head2.blocks.2.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.2.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.2.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.2.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.2.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.2.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.2.attn.linear.0.weight, bbox_head.mask_head2.blocks.2.attn.linear.0.bias, bbox_head.mask_head2.blocks.2.head_norm2.weight, bbox_head.mask_head2.blocks.2.head_norm2.bias, bbox_head.mask_head2.blocks.2.mlp.fc1.weight, bbox_head.mask_head2.blocks.2.mlp.fc1.bias, bbox_head.mask_head2.blocks.2.mlp.fc2.weight, bbox_head.mask_head2.blocks.2.mlp.fc2.bias, bbox_head.mask_head2.blocks.2.self_attention.qkv.weight, bbox_head.mask_head2.blocks.2.self_attention.qkv.bias, bbox_head.mask_head2.blocks.2.self_attention.proj.weight, bbox_head.mask_head2.blocks.2.self_attention.proj.bias, bbox_head.mask_head2.blocks.2.norm3.weight, bbox_head.mask_head2.blocks.2.norm3.bias, bbox_head.mask_head2.blocks.3.head_norm1.weight, bbox_head.mask_head2.blocks.3.head_norm1.bias, bbox_head.mask_head2.blocks.3.attn.q.weight, bbox_head.mask_head2.blocks.3.attn.q.bias, bbox_head.mask_head2.blocks.3.attn.k.weight, bbox_head.mask_head2.blocks.3.attn.k.bias, bbox_head.mask_head2.blocks.3.attn.v.weight, bbox_head.mask_head2.blocks.3.attn.v.bias, bbox_head.mask_head2.blocks.3.attn.proj.weight, bbox_head.mask_head2.blocks.3.attn.proj.bias, bbox_head.mask_head2.blocks.3.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.3.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.3.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.3.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.3.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.3.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.3.attn.linear.0.weight, bbox_head.mask_head2.blocks.3.attn.linear.0.bias, 
bbox_head.mask_head2.blocks.3.head_norm2.weight, bbox_head.mask_head2.blocks.3.head_norm2.bias, bbox_head.mask_head2.blocks.3.mlp.fc1.weight, bbox_head.mask_head2.blocks.3.mlp.fc1.bias, bbox_head.mask_head2.blocks.3.mlp.fc2.weight, bbox_head.mask_head2.blocks.3.mlp.fc2.bias, bbox_head.mask_head2.blocks.3.self_attention.qkv.weight, bbox_head.mask_head2.blocks.3.self_attention.qkv.bias, bbox_head.mask_head2.blocks.3.self_attention.proj.weight, bbox_head.mask_head2.blocks.3.self_attention.proj.bias, bbox_head.mask_head2.blocks.3.norm3.weight, bbox_head.mask_head2.blocks.3.norm3.bias, bbox_head.mask_head2.blocks.4.head_norm1.weight, bbox_head.mask_head2.blocks.4.head_norm1.bias, bbox_head.mask_head2.blocks.4.attn.q.weight, bbox_head.mask_head2.blocks.4.attn.q.bias, bbox_head.mask_head2.blocks.4.attn.k.weight, bbox_head.mask_head2.blocks.4.attn.k.bias, bbox_head.mask_head2.blocks.4.attn.v.weight, bbox_head.mask_head2.blocks.4.attn.v.bias, bbox_head.mask_head2.blocks.4.attn.proj.weight, bbox_head.mask_head2.blocks.4.attn.proj.bias, bbox_head.mask_head2.blocks.4.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.4.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.4.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.4.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.4.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.4.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.4.attn.linear.0.weight, bbox_head.mask_head2.blocks.4.attn.linear.0.bias, bbox_head.mask_head2.blocks.4.head_norm2.weight, bbox_head.mask_head2.blocks.4.head_norm2.bias, bbox_head.mask_head2.blocks.4.mlp.fc1.weight, bbox_head.mask_head2.blocks.4.mlp.fc1.bias, bbox_head.mask_head2.blocks.4.mlp.fc2.weight, bbox_head.mask_head2.blocks.4.mlp.fc2.bias, bbox_head.mask_head2.blocks.4.self_attention.qkv.weight, bbox_head.mask_head2.blocks.4.self_attention.qkv.bias, bbox_head.mask_head2.blocks.4.self_attention.proj.weight, bbox_head.mask_head2.blocks.4.self_attention.proj.bias, bbox_head.mask_head2.blocks.4.norm3.weight, bbox_head.mask_head2.blocks.4.norm3.bias, bbox_head.mask_head2.blocks.5.head_norm1.weight, bbox_head.mask_head2.blocks.5.head_norm1.bias, bbox_head.mask_head2.blocks.5.attn.q.weight, bbox_head.mask_head2.blocks.5.attn.q.bias, bbox_head.mask_head2.blocks.5.attn.k.weight, bbox_head.mask_head2.blocks.5.attn.k.bias, bbox_head.mask_head2.blocks.5.attn.v.weight, bbox_head.mask_head2.blocks.5.attn.v.bias, bbox_head.mask_head2.blocks.5.attn.proj.weight, bbox_head.mask_head2.blocks.5.attn.proj.bias, bbox_head.mask_head2.blocks.5.attn.linear_l1.0.weight, bbox_head.mask_head2.blocks.5.attn.linear_l1.0.bias, bbox_head.mask_head2.blocks.5.attn.linear_l2.0.weight, bbox_head.mask_head2.blocks.5.attn.linear_l2.0.bias, bbox_head.mask_head2.blocks.5.attn.linear_l3.0.weight, bbox_head.mask_head2.blocks.5.attn.linear_l3.0.bias, bbox_head.mask_head2.blocks.5.attn.linear.0.weight, bbox_head.mask_head2.blocks.5.attn.linear.0.bias, bbox_head.mask_head2.blocks.5.head_norm2.weight, bbox_head.mask_head2.blocks.5.head_norm2.bias, bbox_head.mask_head2.blocks.5.mlp.fc1.weight, bbox_head.mask_head2.blocks.5.mlp.fc1.bias, bbox_head.mask_head2.blocks.5.mlp.fc2.weight, bbox_head.mask_head2.blocks.5.mlp.fc2.bias, bbox_head.mask_head2.blocks.5.self_attention.qkv.weight, bbox_head.mask_head2.blocks.5.self_attention.qkv.bias, bbox_head.mask_head2.blocks.5.self_attention.proj.weight, bbox_head.mask_head2.blocks.5.self_attention.proj.bias, bbox_head.mask_head2.blocks.5.norm3.weight, bbox_head.mask_head2.blocks.5.norm3.bias, 
bbox_head.mask_head2.attnen.q.weight, bbox_head.mask_head2.attnen.q.bias, bbox_head.mask_head2.attnen.k.weight, bbox_head.mask_head2.attnen.k.bias, bbox_head.mask_head2.attnen.linear_l1.0.weight, bbox_head.mask_head2.attnen.linear_l1.0.bias, bbox_head.mask_head2.attnen.linear_l2.0.weight, bbox_head.mask_head2.attnen.linear_l2.0.bias, bbox_head.mask_head2.attnen.linear_l3.0.weight, bbox_head.mask_head2.attnen.linear_l3.0.bias, bbox_head.mask_head2.attnen.linear.0.weight, bbox_head.mask_head2.attnen.linear.0.bias
    
    missing keys in source state_dict: bbox_head.cls_thing_branches.0.weight, bbox_head.cls_thing_branches.0.bias, bbox_head.cls_thing_branches.1.weight, bbox_head.cls_thing_branches.1.bias, bbox_head.cls_thing_branches.2.weight, bbox_head.cls_thing_branches.2.bias, bbox_head.cls_thing_branches.3.weight, bbox_head.cls_thing_branches.3.bias, bbox_head.things_mask_head.blocks.0.head_norm1.weight, bbox_head.things_mask_head.blocks.0.head_norm1.bias, bbox_head.things_mask_head.blocks.0.attn.q.weight, bbox_head.things_mask_head.blocks.0.attn.q.bias, bbox_head.things_mask_head.blocks.0.attn.k.weight, bbox_head.things_mask_head.blocks.0.attn.k.bias, bbox_head.things_mask_head.blocks.0.attn.v.weight, bbox_head.things_mask_head.blocks.0.attn.v.bias, bbox_head.things_mask_head.blocks.0.attn.proj.weight, bbox_head.things_mask_head.blocks.0.attn.proj.bias, bbox_head.things_mask_head.blocks.0.attn.linear_l1.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.0.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.0.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.0.attn.linear.0.weight, bbox_head.things_mask_head.blocks.0.attn.linear.0.bias, bbox_head.things_mask_head.blocks.0.head_norm2.weight, bbox_head.things_mask_head.blocks.0.head_norm2.bias, bbox_head.things_mask_head.blocks.0.mlp.fc1.weight, bbox_head.things_mask_head.blocks.0.mlp.fc1.bias, bbox_head.things_mask_head.blocks.0.mlp.fc2.weight, bbox_head.things_mask_head.blocks.0.mlp.fc2.bias, bbox_head.things_mask_head.blocks.1.head_norm1.weight, bbox_head.things_mask_head.blocks.1.head_norm1.bias, bbox_head.things_mask_head.blocks.1.attn.q.weight, bbox_head.things_mask_head.blocks.1.attn.q.bias, bbox_head.things_mask_head.blocks.1.attn.k.weight, bbox_head.things_mask_head.blocks.1.attn.k.bias, bbox_head.things_mask_head.blocks.1.attn.v.weight, bbox_head.things_mask_head.blocks.1.attn.v.bias, bbox_head.things_mask_head.blocks.1.attn.proj.weight, bbox_head.things_mask_head.blocks.1.attn.proj.bias, bbox_head.things_mask_head.blocks.1.attn.linear_l1.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.1.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.1.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.1.attn.linear.0.weight, bbox_head.things_mask_head.blocks.1.attn.linear.0.bias, bbox_head.things_mask_head.blocks.1.head_norm2.weight, bbox_head.things_mask_head.blocks.1.head_norm2.bias, bbox_head.things_mask_head.blocks.1.mlp.fc1.weight, bbox_head.things_mask_head.blocks.1.mlp.fc1.bias, bbox_head.things_mask_head.blocks.1.mlp.fc2.weight, bbox_head.things_mask_head.blocks.1.mlp.fc2.bias, bbox_head.things_mask_head.blocks.2.head_norm1.weight, bbox_head.things_mask_head.blocks.2.head_norm1.bias, bbox_head.things_mask_head.blocks.2.attn.q.weight, bbox_head.things_mask_head.blocks.2.attn.q.bias, bbox_head.things_mask_head.blocks.2.attn.k.weight, bbox_head.things_mask_head.blocks.2.attn.k.bias, bbox_head.things_mask_head.blocks.2.attn.v.weight, bbox_head.things_mask_head.blocks.2.attn.v.bias, bbox_head.things_mask_head.blocks.2.attn.proj.weight, bbox_head.things_mask_head.blocks.2.attn.proj.bias, bbox_head.things_mask_head.blocks.2.attn.linear_l1.0.weight, 
bbox_head.things_mask_head.blocks.2.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.2.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.2.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.2.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.2.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.2.attn.linear.0.weight, bbox_head.things_mask_head.blocks.2.attn.linear.0.bias, bbox_head.things_mask_head.blocks.2.head_norm2.weight, bbox_head.things_mask_head.blocks.2.head_norm2.bias, bbox_head.things_mask_head.blocks.2.mlp.fc1.weight, bbox_head.things_mask_head.blocks.2.mlp.fc1.bias, bbox_head.things_mask_head.blocks.2.mlp.fc2.weight, bbox_head.things_mask_head.blocks.2.mlp.fc2.bias, bbox_head.things_mask_head.blocks.3.head_norm1.weight, bbox_head.things_mask_head.blocks.3.head_norm1.bias, bbox_head.things_mask_head.blocks.3.attn.q.weight, bbox_head.things_mask_head.blocks.3.attn.q.bias, bbox_head.things_mask_head.blocks.3.attn.k.weight, bbox_head.things_mask_head.blocks.3.attn.k.bias, bbox_head.things_mask_head.blocks.3.attn.v.weight, bbox_head.things_mask_head.blocks.3.attn.v.bias, bbox_head.things_mask_head.blocks.3.attn.proj.weight, bbox_head.things_mask_head.blocks.3.attn.proj.bias, bbox_head.things_mask_head.blocks.3.attn.linear_l1.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear_l1.0.bias, bbox_head.things_mask_head.blocks.3.attn.linear_l2.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear_l2.0.bias, bbox_head.things_mask_head.blocks.3.attn.linear_l3.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear_l3.0.bias, bbox_head.things_mask_head.blocks.3.attn.linear.0.weight, bbox_head.things_mask_head.blocks.3.attn.linear.0.bias, bbox_head.things_mask_head.blocks.3.head_norm2.weight, bbox_head.things_mask_head.blocks.3.head_norm2.bias, bbox_head.things_mask_head.blocks.3.mlp.fc1.weight, bbox_head.things_mask_head.blocks.3.mlp.fc1.bias, bbox_head.things_mask_head.blocks.3.mlp.fc2.weight, bbox_head.things_mask_head.blocks.3.mlp.fc2.bias, bbox_head.things_mask_head.attnen.q.weight, bbox_head.things_mask_head.attnen.q.bias, bbox_head.things_mask_head.attnen.k.weight, bbox_head.things_mask_head.attnen.k.bias, bbox_head.things_mask_head.attnen.linear_l1.0.weight, bbox_head.things_mask_head.attnen.linear_l1.0.bias, bbox_head.things_mask_head.attnen.linear_l2.0.weight, bbox_head.things_mask_head.attnen.linear_l2.0.bias, bbox_head.things_mask_head.attnen.linear_l3.0.weight, bbox_head.things_mask_head.attnen.linear_l3.0.bias, bbox_head.things_mask_head.attnen.linear.0.weight, bbox_head.things_mask_head.attnen.linear.0.bias, bbox_head.stuff_mask_head.blocks.0.head_norm1.weight, bbox_head.stuff_mask_head.blocks.0.head_norm1.bias, bbox_head.stuff_mask_head.blocks.0.attn.q.weight, bbox_head.stuff_mask_head.blocks.0.attn.q.bias, bbox_head.stuff_mask_head.blocks.0.attn.k.weight, bbox_head.stuff_mask_head.blocks.0.attn.k.bias, bbox_head.stuff_mask_head.blocks.0.attn.v.weight, bbox_head.stuff_mask_head.blocks.0.attn.v.bias, bbox_head.stuff_mask_head.blocks.0.attn.proj.weight, bbox_head.stuff_mask_head.blocks.0.attn.proj.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.0.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.0.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.0.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.0.attn.linear.0.weight, 
bbox_head.stuff_mask_head.blocks.0.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.0.head_norm2.weight, bbox_head.stuff_mask_head.blocks.0.head_norm2.bias, bbox_head.stuff_mask_head.blocks.0.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.0.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.0.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.0.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.0.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.0.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.0.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.0.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.0.norm3.weight, bbox_head.stuff_mask_head.blocks.0.norm3.bias, bbox_head.stuff_mask_head.blocks.1.head_norm1.weight, bbox_head.stuff_mask_head.blocks.1.head_norm1.bias, bbox_head.stuff_mask_head.blocks.1.attn.q.weight, bbox_head.stuff_mask_head.blocks.1.attn.q.bias, bbox_head.stuff_mask_head.blocks.1.attn.k.weight, bbox_head.stuff_mask_head.blocks.1.attn.k.bias, bbox_head.stuff_mask_head.blocks.1.attn.v.weight, bbox_head.stuff_mask_head.blocks.1.attn.v.bias, bbox_head.stuff_mask_head.blocks.1.attn.proj.weight, bbox_head.stuff_mask_head.blocks.1.attn.proj.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.1.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.1.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.1.head_norm2.weight, bbox_head.stuff_mask_head.blocks.1.head_norm2.bias, bbox_head.stuff_mask_head.blocks.1.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.1.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.1.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.1.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.1.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.1.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.1.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.1.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.1.norm3.weight, bbox_head.stuff_mask_head.blocks.1.norm3.bias, bbox_head.stuff_mask_head.blocks.2.head_norm1.weight, bbox_head.stuff_mask_head.blocks.2.head_norm1.bias, bbox_head.stuff_mask_head.blocks.2.attn.q.weight, bbox_head.stuff_mask_head.blocks.2.attn.q.bias, bbox_head.stuff_mask_head.blocks.2.attn.k.weight, bbox_head.stuff_mask_head.blocks.2.attn.k.bias, bbox_head.stuff_mask_head.blocks.2.attn.v.weight, bbox_head.stuff_mask_head.blocks.2.attn.v.bias, bbox_head.stuff_mask_head.blocks.2.attn.proj.weight, bbox_head.stuff_mask_head.blocks.2.attn.proj.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.2.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.2.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.2.head_norm2.weight, bbox_head.stuff_mask_head.blocks.2.head_norm2.bias, bbox_head.stuff_mask_head.blocks.2.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.2.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.2.mlp.fc2.weight, 
bbox_head.stuff_mask_head.blocks.2.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.2.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.2.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.2.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.2.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.2.norm3.weight, bbox_head.stuff_mask_head.blocks.2.norm3.bias, bbox_head.stuff_mask_head.blocks.3.head_norm1.weight, bbox_head.stuff_mask_head.blocks.3.head_norm1.bias, bbox_head.stuff_mask_head.blocks.3.attn.q.weight, bbox_head.stuff_mask_head.blocks.3.attn.q.bias, bbox_head.stuff_mask_head.blocks.3.attn.k.weight, bbox_head.stuff_mask_head.blocks.3.attn.k.bias, bbox_head.stuff_mask_head.blocks.3.attn.v.weight, bbox_head.stuff_mask_head.blocks.3.attn.v.bias, bbox_head.stuff_mask_head.blocks.3.attn.proj.weight, bbox_head.stuff_mask_head.blocks.3.attn.proj.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.3.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.3.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.3.head_norm2.weight, bbox_head.stuff_mask_head.blocks.3.head_norm2.bias, bbox_head.stuff_mask_head.blocks.3.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.3.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.3.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.3.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.3.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.3.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.3.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.3.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.3.norm3.weight, bbox_head.stuff_mask_head.blocks.3.norm3.bias, bbox_head.stuff_mask_head.blocks.4.head_norm1.weight, bbox_head.stuff_mask_head.blocks.4.head_norm1.bias, bbox_head.stuff_mask_head.blocks.4.attn.q.weight, bbox_head.stuff_mask_head.blocks.4.attn.q.bias, bbox_head.stuff_mask_head.blocks.4.attn.k.weight, bbox_head.stuff_mask_head.blocks.4.attn.k.bias, bbox_head.stuff_mask_head.blocks.4.attn.v.weight, bbox_head.stuff_mask_head.blocks.4.attn.v.bias, bbox_head.stuff_mask_head.blocks.4.attn.proj.weight, bbox_head.stuff_mask_head.blocks.4.attn.proj.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.4.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.4.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.4.head_norm2.weight, bbox_head.stuff_mask_head.blocks.4.head_norm2.bias, bbox_head.stuff_mask_head.blocks.4.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.4.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.4.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.4.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.4.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.4.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.4.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.4.self_attention.proj.bias, 
bbox_head.stuff_mask_head.blocks.4.norm3.weight, bbox_head.stuff_mask_head.blocks.4.norm3.bias, bbox_head.stuff_mask_head.blocks.5.head_norm1.weight, bbox_head.stuff_mask_head.blocks.5.head_norm1.bias, bbox_head.stuff_mask_head.blocks.5.attn.q.weight, bbox_head.stuff_mask_head.blocks.5.attn.q.bias, bbox_head.stuff_mask_head.blocks.5.attn.k.weight, bbox_head.stuff_mask_head.blocks.5.attn.k.bias, bbox_head.stuff_mask_head.blocks.5.attn.v.weight, bbox_head.stuff_mask_head.blocks.5.attn.v.bias, bbox_head.stuff_mask_head.blocks.5.attn.proj.weight, bbox_head.stuff_mask_head.blocks.5.attn.proj.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear_l1.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear_l1.0.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear_l2.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear_l2.0.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear_l3.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear_l3.0.bias, bbox_head.stuff_mask_head.blocks.5.attn.linear.0.weight, bbox_head.stuff_mask_head.blocks.5.attn.linear.0.bias, bbox_head.stuff_mask_head.blocks.5.head_norm2.weight, bbox_head.stuff_mask_head.blocks.5.head_norm2.bias, bbox_head.stuff_mask_head.blocks.5.mlp.fc1.weight, bbox_head.stuff_mask_head.blocks.5.mlp.fc1.bias, bbox_head.stuff_mask_head.blocks.5.mlp.fc2.weight, bbox_head.stuff_mask_head.blocks.5.mlp.fc2.bias, bbox_head.stuff_mask_head.blocks.5.self_attention.qkv.weight, bbox_head.stuff_mask_head.blocks.5.self_attention.qkv.bias, bbox_head.stuff_mask_head.blocks.5.self_attention.proj.weight, bbox_head.stuff_mask_head.blocks.5.self_attention.proj.bias, bbox_head.stuff_mask_head.blocks.5.norm3.weight, bbox_head.stuff_mask_head.blocks.5.norm3.bias, bbox_head.stuff_mask_head.attnen.q.weight, bbox_head.stuff_mask_head.attnen.q.bias, bbox_head.stuff_mask_head.attnen.k.weight, bbox_head.stuff_mask_head.attnen.k.bias, bbox_head.stuff_mask_head.attnen.linear_l1.0.weight, bbox_head.stuff_mask_head.attnen.linear_l1.0.bias, bbox_head.stuff_mask_head.attnen.linear_l2.0.weight, bbox_head.stuff_mask_head.attnen.linear_l2.0.bias, bbox_head.stuff_mask_head.attnen.linear_l3.0.weight, bbox_head.stuff_mask_head.attnen.linear_l3.0.bias, bbox_head.stuff_mask_head.attnen.linear.0.weight, bbox_head.stuff_mask_head.attnen.linear.0.bias
    
    opened by aviadmx 3
  • ImportError Libtorch_cpu.so: undefined symbol

    Thank you for this awesome work.

    Unfortunately, I can't run the training because I get the following error:

    ./tools/dist_train.sh ./configs/panformer/panformer_r50_24e_coco_panoptic.py 1
    + CONFIG=./configs/panformer/panformer_r50_24e_coco_panoptic.py
    + GPUS=1
    + PORT=29503
    ++ dirname ./tools/dist_train.sh
    ++ dirname ./tools/dist_train.sh
    + PYTHONPATH=./tools/..:
    + python -m torch.distributed.launch --nproc_per_node=1 --master_port=29503 ./tools/train.py ./configs/panformer/panformer_r50_24e_coco_panoptic.py --launcher pytorch --deterministic
    Traceback (most recent call last):
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/runpy.py", line 183, in _run_module_as_main
        mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/runpy.py", line 109, in _get_module_details
        __import__(pkg_name)
      File "/home/vision/anaconda3/envs/psf/lib/python3.7/site-packages/torch/__init__.py", line 197, in <module>
        from torch._C import *  # noqa: F403
    ImportError: /home/vision/anaconda3/envs/psf/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: _ZNK3c1010TensorImpl23shallow_copy_and_detachERKNS_15VariableVersionEb
    

    This is my environment:

    [environment screenshots]

    opened by EnnioEvo 0