Overview

Pytorch ReID

Strong, Small, Friendly

A tiny, friendly, strong baseline code for Person-reID (based on pytorch).

Tutorial

Table of contents

Features

We currently support:

  • Circle Loss (CVPR 2020 Oral)
  • Float16 to save GPU memory based on apex
  • Part-based Convolutional Baseline(PCB)
  • Multiple Query Evaluation
  • Re-Ranking (GPU Version)
  • Random Erasing
  • ResNet/DenseNet
  • Visualize Training Curves
  • Visualize Ranking Result
  • Visualize Heatmap
  • Linear Warm-up

Here we provide the hyperparameters and architectures that were used to generate the results. Some of them (e.g., the learning rate) are far from optimal. Do not hesitate to change them and see the effect.

P.S. With a similar structure, we reached Rank@1=87.74% mAP=69.46% with MatConvNet (batchsize=8, dropout=0.75). You may refer to Here. Different frameworks need to be tuned in different ways.

Some News

23 Jun 2021 Attack your re-ID model via Query! They are not as robust as you expected! Check the code Here.

5 Feb 2021 We have supported Circle loss (CVPR20 Oral). You can try it by simply adding --circle.
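
For reference, below is a hedged sketch of the pair-based Circle loss formulation from the paper (similarity scores of positive and negative pairs, margin m, scale gamma). The --circle option in train.py may use a different variant, so treat the function name and defaults here as illustrative.

import torch
import torch.nn.functional as F

# Sketch of the pair-based Circle loss (Sun et al., CVPR 2020); illustrative
# defaults, not necessarily what train.py --circle implements.
def circle_loss(sp, sn, m=0.25, gamma=64.0):
    """sp: similarities of positive pairs, sn: similarities of negative pairs."""
    ap = torch.relu(1 + m - sp.detach())   # adaptive weight for positives
    an = torch.relu(sn.detach() + m)       # adaptive weight for negatives
    delta_p, delta_n = 1 - m, m
    logit_p = -gamma * ap * (sp - delta_p)
    logit_n = gamma * an * (sn - delta_n)
    # softplus over the log-sum-exp of both sides gives the final loss
    return F.softplus(torch.logsumexp(logit_n, dim=0) + torch.logsumexp(logit_p, dim=0))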

11 January 2021 On the Market-1501 dataset, we accelerate the re-ranking process from 89.2s to 9.4ms with one K40m GPU, facilitating real-time post-processing. The pytorch implementation can be found in GPU-Re-Ranking.

11 June 2020 People live in the 3D world. We release a new person re-ID code, Person Re-identification in the 3D Space, which conducts representation learning in the 3D space. You are welcome to check it out.

30 April 2020 We have applied this code to the AICity Challenge 2020, yielding the 1st Place Submission to the re-id track 🚗. Check it out here.

01 March 2020 We release a new image retrieval dataset, called University-1652, for drone-view target localization and drone navigation 🚁. It has a similar setting to person re-ID. You are welcome to check it out.

07 July 2019: I added some new functions, such as --resume, an auto-augmentation policy and the acos loss, to the developing branch and rewrote the save and load functions. I haven't tested these functions thoroughly. Some of the new functions are worth a try. If you are new to this repo, I suggest you stay with the master branch.

01 July 2019: My CVPR19 paper is online. It uses this baseline repo as the teacher model to provide pseudo labels for the generated images, in order to train a better student model. You are welcome to check out the open-source code here.

03 Jun 2019: Testing with multi-scale inputs is added. You can use --ms 1,0.9 when extracting features. It could slightly improve the final result.
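
As a rough illustration of what multi-scale testing does, the sketch below averages features extracted at several input scales and L2-normalizes the result; the exact behaviour of --ms is defined in test.py, and the function name here is hypothetical.

import torch
import torch.nn.functional as F

def extract_multiscale_feature(model, img, scales=(1.0, 0.9)):
    # img: a batch of images (B, C, H, W); model returns an embedding (B, D)
    feats = 0
    for s in scales:
        scaled = img if s == 1.0 else F.interpolate(
            img, scale_factor=s, mode='bilinear', align_corners=False)
        with torch.no_grad():
            feats = feats + model(scaled)
    # L2-normalize the summed feature before computing cosine similarity
    return F.normalize(feats, p=2, dim=1)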

20 May 2019: Linear warm-up is added. You can also warm up the first K epochs with --warm_epoch K. If K <= 0, there will be no warm-up.
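
A minimal sketch of such a linear warm-up schedule is given below, assuming a base learning rate and a standard torch.optim optimizer; the actual --warm_epoch option may ramp per iteration rather than per epoch.

def warmup_lr(optimizer, base_lr, epoch, warm_epoch):
    # Linearly ramp the learning rate up to base_lr over the first
    # warm_epoch epochs; no warm-up when warm_epoch <= 0.
    if warm_epoch <= 0 or epoch >= warm_epoch:
        factor = 1.0
    else:
        factor = (epoch + 1) / warm_epoch
    for group in optimizer.param_groups:
        group['lr'] = base_lr * factor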

What's new: FP16 has been added. It can be used by simply adding --fp16. You need to install apex and update your pytorch to 1.0.

Float16 could save about 50% of GPU memory usage without an accuracy drop. Our baseline can be trained with only 2GB of GPU memory.

python train.py --fp16
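
For orientation, here is a hedged sketch of mixed-precision training with NVIDIA apex amp (opt_level "O1"); depending on the code version, --fp16 may instead rely on the older network_to_half / FP16_Optimizer helpers, and the function below is illustrative only.

from apex import amp

def train_fp16(model, optimizer, criterion, dataloader):
    # Patch the model and optimizer for mixed precision, then train as usual.
    model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
    for imgs, labels in dataloader:
        outputs = model(imgs.cuda())
        loss = criterion(outputs, labels.cuda())
        optimizer.zero_grad()
        # Scale the loss so that small fp16 gradients do not underflow.
        with amp.scale_loss(loss, optimizer) as scaled_loss:
            scaled_loss.backward()
        optimizer.step()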

What's new: Visualizing the ranking result is added.

python prepare.py
python train.py
python test.py
python demo.py --query_index 777

What's new: Multiple-query evaluation is added. The multiple-query result is about Rank@1=91.95% mAP=78.06%.

python prepare.py
python train.py
python test.py --multi
python evaluate_gpu.py

What's new: PCB is added. You may use '--PCB' to use this model. It can achieve around Rank@1=92.73% mAP=78.16%. I used a GPU (P40) with 24GB of memory. You may try a smaller batch size and a smaller learning rate (for stability), for example, --batchsize 32 --lr 0.01 --PCB.

python train.py --PCB --batchsize 64 --name PCB-64
python test.py --PCB --name PCB-64
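
To illustrate the PCB idea, the sketch below pools the backbone feature map into a few horizontal stripes and attaches a separate classifier to each part; the class name and the number of parts (6) follow the paper rather than model.py verbatim.

import torch.nn as nn

class PCBHead(nn.Module):
    def __init__(self, in_channels, num_classes, num_parts=6):
        super().__init__()
        self.num_parts = num_parts
        self.pool = nn.AdaptiveAvgPool2d((num_parts, 1))  # one cell per stripe
        self.classifiers = nn.ModuleList(
            [nn.Linear(in_channels, num_classes) for _ in range(num_parts)])

    def forward(self, feat_map):            # feat_map: (B, C, H, W)
        parts = self.pool(feat_map)         # (B, C, num_parts, 1)
        logits = [clf(parts[:, :, i, 0])    # per-stripe prediction
                  for i, clf in enumerate(self.classifiers)]
        return logits                       # list of (B, num_classes)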

What's new: You may try evaluate_gpu.py to conduct a faster evaluation with GPU.

What's new: You may apply '--use_dense' to use DenseNet-121. It can reach around Rank@1=89.91% mAP=73.58%.

What's new: Re-ranking is added to the evaluation. The re-ranked result is about Rank@1=90.20% mAP=84.76%.

What's new: Random Erasing is added to training.
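
The sketch below shows the usual Random Erasing recipe (Zhong et al.): with probability p, overwrite a random rectangle of the image tensor with noise. The repo ships its own random_erasing.py, so the parameter names and defaults here are illustrative.

import math, random
import torch

class RandomErasing:
    def __init__(self, p=0.5, area=(0.02, 0.4), ratio=(0.3, 3.3)):
        self.p, self.area, self.ratio = p, area, ratio

    def __call__(self, img):                # img: a (C, H, W) tensor
        if random.random() > self.p:
            return img
        _, h, w = img.shape
        for _ in range(10):                 # try a few boxes until one fits
            target = random.uniform(*self.area) * h * w
            aspect = random.uniform(*self.ratio)
            eh = int(round(math.sqrt(target * aspect)))
            ew = int(round(math.sqrt(target / aspect)))
            if eh < h and ew < w:
                y = random.randint(0, h - eh)
                x = random.randint(0, w - ew)
                img[:, y:y + eh, x:x + ew] = torch.rand(img.size(0), eh, ew)
                return img
        return img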

What's new: I added some code to generate training curves. The figure will be saved into the model folder during training.
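
A minimal sketch of how such curves can be drawn and saved with matplotlib is given below; the file name and the exact quantities plotted by train.py may differ.

import os
import matplotlib
matplotlib.use('Agg')                       # save figures without a display
import matplotlib.pyplot as plt

def save_curves(history, model_dir, fname='train_curves.jpg'):
    # history: dict of per-epoch lists, e.g. {'train_loss': [...], 'val_loss': [...]}
    fig, ax = plt.subplots(figsize=(6, 4))
    for key, values in history.items():
        ax.plot(range(len(values)), values, label=key)
    ax.set_xlabel('epoch')
    ax.legend()
    fig.savefig(os.path.join(model_dir, fname))
    plt.close(fig)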

Trained Model

I re-trained several models, and the results may be different from the original ones. Just for a quick reference, you may directly use these models. The download link is Here.

Methods | Rank@1 | mAP | Reference
ResNet-50 | 88.84% | 71.59% | python train.py --train_all
DenseNet-121 | 90.17% | 74.02% | python train.py --name ft_net_dense --use_dense --train_all
DenseNet-121 (Circle) | 91.00% | 76.54% | python train.py --name ft_net_dense_circle_w5 --circle --use_dense --train_all --warm_epoch 5
PCB | 92.64% | 77.47% | python train.py --name PCB --PCB --train_all --lr 0.02
ResNet-50 (fp16) | 88.03% | 71.40% | python train.py --name fp16 --fp16 --train_all
ResNet-50 (all tricks) | 91.83% | 78.32% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 8 --lr 0.02 --name warm5_s1_b8_lr2_p0.5
ResNet-50 (all tricks+Circle) | 92.13% | 79.84% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 8 --lr 0.02 --name warm5_s1_b8_lr2_p0.5_circle --circle

Model Structure

You may learn more from model.py. We add one linear layer (bottleneck), one batchnorm layer and ReLU.
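
As a rough sketch of that head (see model.py for the authoritative version, which may differ in activation and dropout settings):

import torch.nn as nn

class ClassBlock(nn.Module):
    def __init__(self, input_dim, class_num, bottleneck=512, droprate=0.5):
        super().__init__()
        self.add_block = nn.Sequential(
            nn.Linear(input_dim, bottleneck),   # linear bottleneck
            nn.BatchNorm1d(bottleneck),
            nn.ReLU(inplace=True),
            nn.Dropout(p=droprate),
        )
        self.classifier = nn.Linear(bottleneck, class_num)

    def forward(self, x):
        x = self.add_block(x)                   # embedding used at test time
        return self.classifier(x)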

Prerequisites

  • Python 3.6
  • GPU Memory >= 6G
  • Numpy
  • Pytorch 0.3+
  • [Optional] apex (for float16)
  • [Optional] pretrainedmodels

(Some reports found that updating numpy fixes the accuracy. If you only get 50~80% Top-1 accuracy, just try updating it.) We have successfully run the code with numpy 1.12.1 and 1.13.1.

Getting started

Installation

  • Install Torchvision from the source

git clone https://github.com/pytorch/vision
cd vision
python setup.py install

  • [Optional] You may skip it. Install apex from the source

git clone https://github.com/NVIDIA/apex.git
cd apex
python setup.py install --cuda_ext --cpp_ext

Because pytorch and torchvision are ongoing projects, note that our code has been tested with Pytorch 0.3.0/0.4.0/0.5.0/1.0.0 and Torchvision 0.2.0/0.2.1.

Dataset & Preparation

Download Market1501 Dataset [Google] [Baidu]

Preparation: Put the images with the same ID in one folder. You may use

python prepare.py

Remember to change the dataset path to your own path.
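
For intuition, prepare.py essentially groups images by the person-ID prefix of the Market-1501 filename (e.g. 0002_c1s1_000451_03.jpg belongs to ID 0002) so that torchvision's ImageFolder can read them. A simplified sketch with a hypothetical helper name:

import os
import shutil
from pathlib import Path

def group_by_id(src_dir, dst_dir):
    # Copy every .jpg into a sub-folder named after its person-ID prefix.
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for name in os.listdir(src_dir):
        if not name.endswith('.jpg'):
            continue
        pid = name.split('_')[0]                # e.g. '0002'
        (dst / pid).mkdir(exist_ok=True)
        shutil.copyfile(os.path.join(src_dir, name), str(dst / pid / name))

# e.g. group_by_id('Market/bounding_box_train', 'Market/pytorch/train')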

Furthermore, you can also test our code on the DukeMTMC-reID dataset (GoogleDriver or BaiduYun, password: bhbh). Our baseline result is not as high on DukeMTMC-reID: Rank@1=64.23%, mAP=43.92%. The hyperparameters need to be tuned.

Train

Train a model by

python train.py --gpu_ids 0 --name ft_ResNet50 --train_all --batchsize 32  --data_dir your_data_path

--gpu_ids which gpu to run.

--name the name of model.

--data_dir the path of the training data.

--train_all using all images to train.

--batchsize batch size.

--erasing_p random erasing probability.

Train a model with random erasing by

python train.py --gpu_ids 0 --name ft_ResNet50 --train_all --batchsize 32  --data_dir your_data_path --erasing_p 0.5

Test

Use trained model to extract feature by

python test.py --gpu_ids 0 --name ft_ResNet50 --test_dir your_data_path  --batchsize 32 --which_epoch 59

--gpu_ids which gpu to run.

--batchsize batch size.

--name the dir name of trained model.

--which_epoch select the i-th model.

--data_dir the path of the testing data.

Evaluation

python evaluate.py

It will output Rank@1, Rank@5, Rank@10 and mAP results. You may also try evaluate_gpu.py to conduct a faster evaluation with GPU.

For the mAP calculation, you can also refer to the C++ code for Oxford Buildings. We use the triangle mAP calculation (consistent with the original Market-1501 code).
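
A minimal sketch of that trapezoidal ("triangle") AP computation for a single query is shown below; ranked_good holds the 0-based ranks at which the true matches appear after junk images are removed, and the helper name is illustrative.

def average_precision(ranked_good):
    ngood = len(ranked_good)
    ap = 0.0
    for i, rank in enumerate(ranked_good):
        d_recall = 1.0 / ngood
        precision = (i + 1) / (rank + 1)
        # precision just before this match; defined as 1.0 at rank 0
        old_precision = i / rank if rank != 0 else 1.0
        ap += d_recall * (precision + old_precision) / 2
    return ap

# e.g. matches found at ranks 0, 3 and 5 -> average_precision([0, 3, 5]) ~= 0.62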

re-ranking

python evaluate_rerank.py

It may take more than 10GB of memory to run, so run it on a powerful machine if possible.

It will output Rank@1, Rank@5, Rank@10 and mAP results.

Tips

Note the format of the camera ID and the number of cameras.

For some datasets, e.g., MSMT17, there are more than 10 cameras. You need to modify prepare.py and test.py to read double-digit camera IDs.

For some vehicle re-ID datasets, e.g., VeRi, you also need to modify prepare.py and test.py, since they have different naming rules; see https://github.com/layumi/Person_reID_baseline_pytorch/issues/107 (sorry, it is in Chinese) and the sketch below.
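
As a sketch of the kind of change involved, the hypothetical helper below reads a camera ID of one or more digits after the 'c' in Market-1501- or VeRi-style filenames instead of a single character; the exact filename pattern of each dataset still needs to be checked against prepare.py and test.py.

import re

def parse_camera_id(filename):
    # Market-1501: 0002_c1s1_000451_03.jpg -> 1;  VeRi: 0002_c002_... -> 2
    m = re.search(r'c(\d+)', filename)
    if m is None:
        raise ValueError('no camera id found in ' + filename)
    return int(m.group(1))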

Citation

The following paper uses and reports the result of the baseline model. You may cite it in your paper.

@article{zheng2019joint,
  title={Joint discriminative and generative learning for person re-identification},
  author={Zheng, Zhedong and Yang, Xiaodong and Yu, Zhiding and Zheng, Liang and Yang, Yi and Kautz, Jan},
  journal={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}

The following papers may be the first two to use the bottleneck baseline. You may cite them in your paper.

@inproceedings{DBLP:journals/corr/SunZDW17,
  author    = {Yifan Sun and
               Liang Zheng and
               Weijian Deng and
               Shengjin Wang},
  title     = {SVDNet for Pedestrian Retrieval},
  booktitle   = {ICCV},
  year      = {2017},
}

@article{hermans2017defense,
  title={In Defense of the Triplet Loss for Person Re-Identification},
  author={Hermans, Alexander and Beyer, Lucas and Leibe, Bastian},
  journal={arXiv preprint arXiv:1703.07737},
  year={2017}
}

Basic Model

@article{zheng2018discriminatively,
  title={A discriminatively learned CNN embedding for person reidentification},
  author={Zheng, Zhedong and Zheng, Liang and Yang, Yi},
  journal={ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)},
  volume={14},
  number={1},
  pages={13},
  year={2018},
  publisher={ACM}
}

@article{zheng2020vehiclenet,
  title={VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification},
  author={Zheng, Zhedong and Ruan, Tao and Wei, Yunchao and Yang, Yi and Mei, Tao},
  journal={IEEE Transactions on Multimedia (TMM)},
  year={2020}
}

Related Repos

  1. Pedestrian Alignment Network
  2. 2stream Person re-ID
  3. Pedestrian GAN
  4. Language Person Search
  5. DG-Net
  6. 3D Person re-ID
Comments
  •  .jpg files not find

    net output size: torch.Size([8, 751]) [Resize(size=(288, 144), interpolation=PIL.Image.BICUBIC), RandomCrop(size=(256, 128), padding=0), RandomHorizontalFlip(p=0.5), ToTensor(), Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])] Traceback (most recent call last): File "train.py", line 114, in data_transforms['train']) File "/home/xiaowei/anaconda3/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 178, in init target_transform=target_transform) File "/home/xiaowei/anaconda3/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 79, in init "Supported extensions are: " + ",".join(extensions))) RuntimeError: Found 0 files in subfolders of: /home/xiaowei/Person_reID_baseline_pytorch/market/pytorch/train Supported extensions are: .jpg,.jpeg,.png,.ppm,.bmp,.pgm,.tif

    I do have images in my directory, the python version is 3.6, and the pytorch is 0.3.0. What's causing this?

    opened by xiaoweihappy123 15
  • I followed all your instruction on train, test and evalutate, but only reached rank@1=~0.75, mAP=~.6 on market

    Thanks for sharing this wonderful code! I downloaded your code, only changed the data dir, and followed all your instructions and hyperparameters, with the same versions of pytorch and torchvision, but I tried several times and only reached Rank@1=~0.75 and mAP=~0.6... What would you think the reason is? Thx again, good luck with your work!

    opened by ElijhaLi 15
  • NameError: name 'network_to_half' is not defined

    when I test with python test.py --gpu_ids 0 --name ft_ResNet50 --test_dir Dataset/Market/pytorch --batchsize 32 --which_epoch 59 --PCB

    error: NameError: name 'network_to_half' is not defined

    how can I solve it?

    opened by xxx2974 14
  • Training problems and test results on the VeRi dataset

    Hello, I trained on the VeRi-776 dataset, with 576 classes as the training set and 200 classes as the test set. 1. In model.py, I set class_num=576. 2. In test.py, when I set model_structure = ft_net(200) for the 200 test classes, running python test.py reports an error saying the trained model has 576 dimensions while here it is 200, which does not match. However, when I set model_structure = ft_net(576) for the 576 training classes, python test.py runs without error, but rank1, rank5, rank10 and mAP are all 0. Why is that? 3. Do the training set and the test set need to have the same number of classes? In the original code for the Market dataset, both training and testing use 751 classes. (For VeRi-776, how should I split it? 576 for training and 200 for testing, or 388 for training and 388 for testing?)

    opened by fukai001 13
  • Inceptionv3

    hi @layumi , I try to test inceptionv3, I change the forward and model. It appears the following problems Traceback (most recent call last): File "inceptionv3_train.py", line 346, in <module> num_epochs=500) File ''inceptionv3_train.py", line 215, in train_model outputs = model(inputs) File "/lib/python3.5/site-packages/torch/nn/modules/module.py", line 357, in __call__ result = self.forward(*input, **kwargs) File "inceptionv3_train.py", line 195, in forward x = self.classifier(x) File "/lib/python3.5/site-packages/torch/nn/modules/module.py", line 357, in __call__ result = self.forward(*input, **kwargs) File "inceptionv3_train.py", line 55, in forward x = self.add_block(x) File "/lib/python3.5/site-packages/torch/nn/modules/module.py", line 357, in __call__ result = self.forward(*input, **kwargs) File "/lib/python3.5/site-packages/torch/nn/modules/container.py", line 67, in forward input = module(input) File "/lib/python3.5/site-packages/torch/nn/modules/module.py", line 357, in __call__ result = self.forward(*input, **kwargs) File "/lib/python3.5/site-packages/torch/nn/modules/linear.py", line 55, in forward return F.linear(input, self.weight, self.bias) File "/lib/python3.5/site-packages/torch/nn/functional.py", line 837, in linear output = input.matmul(weight.t()) File "/lib/python3.5/site-packages/torch/autograd/variable.py", line 386, in matmul return torch.matmul(self, other) File "/lib/python3.5/site-packages/torch/functional.py", line 192, in matmul output = torch.mm(tensor1, tensor2) RuntimeError: size mismatch at /pytorch/torch/lib/THC/generic/THCTensorMathBlas.cu:247

    opened by buaaswf 13
  • a error in train.py

    python train.py --gpu_ids 0 --name ft_ResNet50 --train_all --batchsize 32 error occurs as below:

    File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\envs\pytorch\lib\multiprocessing\spawn.py", line 144, in get_preparation_data _check_not_importing_main() File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\envs\pytorch\lib\multiprocessing\spawn.py", line 137, in _check_not_importing_main is not going to be frozen to produce an executable.''') RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:
    
            if __name__ == '__main__':
                freeze_support()
                ...
    
        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
    ForkingPickler(file, protocol).dump(obj)
    

    BrokenPipeError: [Errno 32] Broken pipe

    opened by hujiaodou96 11
  • A problem when calculating mAP

    https://github.com/layumi/Person_reID_baseline_pytorch/blob/a253943e16abd94277ee832a950b975df75b11e3/evaluate.py#L47-L54

    In the above code, I can't understand the way old_precision is calculated. Here old_precision should be the precision of the last true match; should it be changed like this? old_precision = i*1.0/(rows_good[i-1] +1) or just

    ...
    ap = ap + d_recall*(old_precision + precision)/2 
    # Simplified from here `http://www.robots.ox.ac.uk/~vgg/data/oxbuildings/compute_ap.cpp`
    old_precision = precision
    
    opened by budui 10
  • AttributeError:nodule 'apex.amp' has no attribute 'initialize'

    Hello author! I have installed apex and can use float16 for the network computation with the following statements: if fp16: model = network_to_half(model) optimizer_ft = FP16_Optimizer(optimizer_ft, static_loss_scale = 128.0). However, using model, optimizer_ft = amp.initialize(model, optimizer_ft, opt_level = "O1") raises the error in the title. I have already imported the relevant packages; how can I solve this?

    opened by Allen-lz 8
  • Goog Rank1 but bad MAP

    I ran the project with my trained model whose parameters include --train_all --batchsize 32 and no color jitter. When I evaluate the model, I get rank1 of 87% but mAP of 2.96%. My train loss image is attached.

    opened by wang5566 7
  • About Circle Loss

    Hi, sir. When I use the Circle Loss to train the model with python train.py --name ft_resNet50_circle --circle, I encountered the following problem:

    [image] Could you share how to solve this problem? Thank you! All the best.

    opened by finger-monkey 6
  • when i use train.py ,it can't run normal!!!

    [email protected]:~/soft_wty/Person_reID_baseline_pytorch-master$ python3 train.py --gpu_ids 0 --name ft_ResNet50 --train_all --batchsize 32 --data_dir /home/wty/soft_wty/dataset/market1501 --erasing_p 0.5 Downloading: "https://download.pytorch.org/models/densenet121-a639ec97.pth" to /home/wty/.torch/models/densenet121-a639ec97.pth Traceback (most recent call last): File "/home/wty/soft_wty/local_install/lib/python3.6/urllib/request.py", line 1318, in do_open encode_chunked=req.has_header('Transfer-encoding')) File "/home/wty/soft_wty/local_install/lib/python3.6/http/client.py", line 1239, in request self._send_request(method, url, body, headers, encode_chunked) File "/home/wty/soft_wty/local_install/lib/python3.6/http/client.py", line 1285, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/home/wty/soft_wty/local_install/lib/python3.6/http/client.py", line 1234, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/home/wty/soft_wty/local_install/lib/python3.6/http/client.py", line 1026, in _send_output self.send(msg) File "/home/wty/soft_wty/local_install/lib/python3.6/http/client.py", line 964, in send self.connect() File "/home/wty/soft_wty/local_install/lib/python3.6/http/client.py", line 1400, in connect server_hostname=server_hostname) File "/home/wty/soft_wty/local_install/lib/python3.6/ssl.py", line 407, in wrap_socket _context=self, _session=session) File "/home/wty/soft_wty/local_install/lib/python3.6/ssl.py", line 814, in init self.do_handshake() File "/home/wty/soft_wty/local_install/lib/python3.6/ssl.py", line 1068, in do_handshake self._sslobj.do_handshake() File "/home/wty/soft_wty/local_install/lib/python3.6/ssl.py", line 689, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833) During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "train.py", line 20, in from model import ft_net, ft_net_dense, PCB File "/home/software_mount/wty/Person_reID_baseline_pytorch-master/model.py", line 200, in net = ft_net_dense(751) File "/home/software_mount/wty/Person_reID_baseline_pytorch-master/model.py", line 83, in init model_ft = models.densenet121(pretrained=True) File "/home/wty/soft_wty/local_install/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/models/densenet.py", line 35, in densenet121 File "/home/wty/soft_wty/local_install/lib/python3.6/site-packages/torch/utils/model_zoo.py", line 65, in load_url _download_url_to_file(url, cached_file, hash_prefix, progress=progress) File "/home/wty/soft_wty/local_install/lib/python3.6/site-packages/torch/utils/model_zoo.py", line 70, in _download_url_to_file u = urlopen(url) File "/home/wty/soft_wty/local_install/lib/python3.6/urllib/request.py", line 223, in urlopen return opener.open(url, data, timeout) File "/home/wty/soft_wty/local_install/lib/python3.6/urllib/request.py", line 526, in open response = self._open(req, data) File "/home/wty/soft_wty/local_install/lib/python3.6/urllib/request.py", line 544, in _open '_open', req) File "/home/wty/soft_wty/local_install/lib/python3.6/urllib/request.py", line 504, in _call_chain result = func(*args) File "/home/wty/soft_wty/local_install/lib/python3.6/urllib/request.py", line 1361, in https_open context=self._context, check_hostname=self._check_hostname) File "/home/wty/soft_wty/local_install/lib/python3.6/urllib/request.py", line 1320, in do_open raise URLError(err) urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)>

    pip3 list show: cycler (0.10.0) kiwisolver (1.0.1) matplotlib (2.2.2) numpy (1.15.0) Pillow (5.2.0) pip (9.0.3) pyparsing (2.2.0) python-dateutil (2.7.3) pytz (2018.5) PyYAML (3.13) setuptools (39.0.1) six (1.11.0) torch (0.4.0) torchvision (0.2.1) tqdm (3.7.0)

    python3 -V show: Python 3.6.5

    help wanted 
    opened by 695874419 6
  • http://188.138.127.15:81/Datasets/Market-1501-v15.09.15.zip download link is broken

    Describe the bug

    • trying to download the market-1501 dataset from the link mentioned here https://github.com/layumi/Person_reID_baseline_pytorch/tree/master/tutorial but am not getting a response

    To Reproduce Steps to reproduce the behavior:

    wget http://188.138.127.15:81/Datasets/Market-1501-v15.09.15.zip
    

    just times out

    Expected behavior

    • download the zip
    opened by lehigh123 1
  • A question about Occluded-Duke dataset

    Hello, layumi. I want to train on the Occluded-Duke dataset; can I use prepare.py to prepare the dataset directly?

    I used prepare.py to prepare the Occluded-Duke dataset with a Swin Transformer backbone, and rank1 reached 59%. Do you have any suggestions to improve this result? Thank you!!

    opened by ZzzybEric 1
  • Question about the evaluation_gpu.py

    Hello,

    First of all, thank you for your work. I'm working on two vehicle datasets using your code, but the images from the test set do not have camera IDs, so the program returned 0 for all results. To overcome this problem, should I randomly assign a camera ID to each image, or should I change the junk_index in evaluation_gpu.py to an empty array [ ]?

    I'm looking forward to hearing your answer. Thank you and best regards.

    opened by viet2411 5
  • How can this model be changed to use DDP training?

    How can this model be changed to use DDP (DistributedDataParallel) training? That way the batch size could be increased; would that improve rank1? Do you have any suggestions for improving rank1? I have tried everything, but PCB's rank1 only reaches a bit over 93% and mAP only 84%. Looking forward to the author's reply. By the way, because test.py has been changed, the config files of previously trained models are missing some information that needs to be added, otherwise they cannot run.

    opened by Dhaizei 1
  • The VIPeR dataset has no val folder

    Traceback (most recent call last): File "train.py", line 168, in data_transforms['val']) File "/home/adminis/anaconda3/envs/python3.6/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 313, in init is_valid_file=is_valid_file) File "/home/adminis/anaconda3/envs/python3.6/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 145, in init classes, class_to_idx = self.find_classes(self.root) File "/home/adminis/anaconda3/envs/python3.6/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 221, in find_classes return find_classes(directory) File "/home/adminis/anaconda3/envs/python3.6/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 40, in find_classes classes = sorted(entry.name for entry in os.scandir(directory) if entry.is_dir()) FileNotFoundError: [Errno 2] No such file or directory: '../VIPeR/pytorch/val'

    opened by jxVoidreaver 1
Releases(v1.0.4)
  • v1.0.4(Jul 25, 2022)

    24 Jul 2022 Market-HQ is released with super-resolution quality from 128×64 to 512×256. Please check at https://github.com/layumi/HQ-Market

    14 Jul 2022 Add adversarial training by python train.py --name ftnet_adv --adv 0.1 --aiter 40.

    1 Feb 2022 Speed up the inference process by about 10 seconds by removing the cat function in test.py.

    1 Feb 2022 Add the demo with TensorRT (The fast inference speed may depend on the GPU with the latest RT Core).

    Source code(tar.gz)
    Source code(zip)
  • v1.0.3(Dec 31, 2021)

  • v1.0.2(Dec 3, 2021)

    We add support for four losses, including the triplet loss, contrastive loss, sphere loss and lifted loss. The hyper-parameters are still being tuned.

    What's Changed

    • Random grayscale erasure by @CynicalHeart in https://github.com/layumi/Person_reID_baseline_pytorch/pull/301

    New Contributors

    • @CynicalHeart made their first contribution in https://github.com/layumi/Person_reID_baseline_pytorch/pull/301

    Full Changelog: https://github.com/layumi/Person_reID_baseline_pytorch/compare/v1.0.1...v1.0.2

    Source code(tar.gz)
    Source code(zip)
  • v1.0.1(Dec 1, 2021)

    Add support for Swin Transformer / EfficientNet / HRNet

    What's Changed

    • Create LICENSE by @layumi in https://github.com/layumi/Person_reID_baseline_pytorch/pull/1
    • Create random_erasing.py by @zhunzhong07 in https://github.com/layumi/Person_reID_baseline_pytorch/pull/6
    • add random erasing by @zhunzhong07 in https://github.com/layumi/Person_reID_baseline_pytorch/pull/8
    • add random erasing by @zhunzhong07 in https://github.com/layumi/Person_reID_baseline_pytorch/pull/7
    • Update train.py by @zhangchuangnankai in https://github.com/layumi/Person_reID_baseline_pytorch/pull/63
    • The pytorch implementation of GNN-Re-Ranking. by @Xuanmeng-Zhang in https://github.com/layumi/Person_reID_baseline_pytorch/pull/250
    • Update the code for GNN-Re-Ranking. by @Xuanmeng-Zhang in https://github.com/layumi/Person_reID_baseline_pytorch/pull/251
    • Add the code for GPU-Re-Ranking by @Xuanmeng-Zhang in https://github.com/layumi/Person_reID_baseline_pytorch/pull/252
    • update filename by @Xuanmeng-Zhang in https://github.com/layumi/Person_reID_baseline_pytorch/pull/255
    • Change application by @ronghao233 in https://github.com/layumi/Person_reID_baseline_pytorch/pull/278

    New Contributors

    • @layumi made their first contribution in https://github.com/layumi/Person_reID_baseline_pytorch/pull/1
    • @zhunzhong07 made their first contribution in https://github.com/layumi/Person_reID_baseline_pytorch/pull/6
    • @zhangchuangnankai made their first contribution in https://github.com/layumi/Person_reID_baseline_pytorch/pull/63
    • @Xuanmeng-Zhang made their first contribution in https://github.com/layumi/Person_reID_baseline_pytorch/pull/250
    • @ronghao233 made their first contribution in https://github.com/layumi/Person_reID_baseline_pytorch/pull/278

    Full Changelog: https://github.com/layumi/Person_reID_baseline_pytorch/commits/v1.0.1

    Source code(tar.gz)
    Source code(zip)
Owner
Zhedong Zheng
Hi, I am a PhD student at the University of Technology Sydney. My work focuses on computer vision, especially representation learning.