tf2-keras implementation of YOLOv5

Overview

YOLOv5 in tensorflow2.x-keras

Model testing

  • Trained on COCO2017 (val 5k)

  • Detection results

  • Precision / recall

Requirements

pip3 install -r requirements.txt

Get started

  1. Training
python3 train.py
  2. TensorBoard
tensorboard --host 0.0.0.0 --logdir ./logs/ --port 8053 --samples_per_plugin=images=40
  3. View the results at
http://127.0.0.1:8053
  4. Testing: edit input_image and model_path in detect.py (see the example below), then run
python3 detect.py
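
A minimal sketch of the two settings mentioned in step 4; the paths below are placeholders, not values taken from the repository:

    # Hypothetical values for the variables to edit in detect.py (placeholder paths)
    input_image = './data/test.jpg'            # image to run detection on
    model_path = './logs/yolov5s-best.h5'      # trained Keras weights produced by train.py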

Training on your own data

  1. Label your data with labelme
  2. Open the data/labelme2coco.py script and modify the following:
input_dir = 'directory where labelme saved the json annotation files'
output_dir = 'directory to write the COCO-format output; an empty directory is recommended'
labels = "a txt file listing all class names used during labeling, one class per line, no quotes needed"
  3. Run the data/labelme2coco.py script; it generates the corresponding json file and images in output_dir
  4. Modify coco_annotation_file and num_class in train.py. Note that class names can be obtained via CoCoDataGenrator(*).coco.cats[label_id]['name']; because category ids in COCO are not contiguous, classes looked up by positional index into the array from coco.cats may be wrong (see the sketch after this list).
  5. Start training: python3 train.py
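
Because COCO category ids are not guaranteed to be contiguous, a safe way to recover class names is to build an explicit id-to-name mapping from the annotation file generated above. The sketch below assumes the standard COCO json layout produced by data/labelme2coco.py; the annotation path is a placeholder:

    # Build an explicit category-id -> class-name mapping from a COCO-style annotation file,
    # instead of indexing the categories list by position (ids need not be contiguous).
    import json

    with open('output_dir/annotations.json') as f:   # placeholder path to the generated json
        coco = json.load(f)

    id_to_name = {cat['id']: cat['name'] for cat in coco['categories']}
    num_class = len(id_to_name)
    print(num_class, id_to_name)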
Comments
  • Question about the class-loss computation

    Hi, I don't quite understand this part of the loss: https://github.com/yyccR/yolov5_in_tf2_keras/blob/3e6645cbf94d2a1e11c33663e80113daa4590321/loss.py#L142-L152 Should the last two entries of targets be the confidence (1) and the index of the best anchor? https://github.com/yyccR/yolov5_in_tf2_keras/blob/3e6645cbf94d2a1e11c33663e80113daa4590321/loss.py#L288-L293 If so, the true_obj and true_cls split out here should correspond to that confidence (1) and the best-anchor index. Then for the class loss https://github.com/yyccR/yolov5_in_tf2_keras/blob/3e6645cbf94d2a1e11c33663e80113daa4590321/loss.py#L356 isn't it the best-anchor index that is being used in the computation, and does it relate to obj_mask?

    opened by whalefa1I 5
  • sparse_categorical_crossentropy produces NaN during training

    For some data, NaN appears at this line: https://github.com/yyccR/yolov5_in_tf2_keras/blob/033a1156c1481f4258bf24a4a8215af39682da94/loss.py#L357 I checked is_nan on the inputs and they are all normal, and replacing sparse_categorical_crossentropy with binary_crossentropy makes the problem go away. Is there a difference between the two in this computation, and can they be substituted for each other?

    opened by whalefa1I 3
  • labelme2coco processing logic is wrong

    When using your code to train on my own dataset, I found that labelme2coco.py seems to be missing the handling for shape_type == "rectangle", so the annotations entry in my generated json file ended up empty. Below is the code from lines 100 to 124 of labelme2coco.py:

        if shape_type == "polygon":
            mask = labelme.utils.shape_to_mask(
                img.shape[:2], points, shape_type
            )
            # cv2.imshow("", np.array(mask, dtype=np.uint8) * 255)
            # cv2.waitKey(0)

            if group_id is None:
                group_id = uuid.uuid1()

            instance = (label, group_id)
            # print(instance)

            if instance in masks:
                masks[instance] = masks[instance] | mask
            else:
                masks[instance] = mask
            # print(masks[instance].shape)

            if shape_type == "rectangle":
                (x1, y1), (x2, y2) = points
                x1, x2 = sorted([x1, x2])
                y1, y2 = sorted([y1, y2])
                points = [x1, y1, x2, y1, x2, y2, x1, y2]
            if shape_type == "circle":
                ....

    The code can never reach the shape_type == "rectangle" or shape_type == "circle" branches.

    opened by aijialin 2
  • layers.py

    According to ultralytics/yolov5:

    https://github.com/ultralytics/yolov5/blob/63ddb6f0d06f6309aa42bababd08c859197a27af/models/common.py#L70-L73

    this piece of code:

    https://github.com/yyccR/yolov5_in_tf2_keras/blob/46298d7c98073750176d64896ee9dc01b55c5aca/layers.py#L127-L132

    should it be rewritten as:

        def call(self, inputs, *args, **kwargs):
            # multi-head attention with a residual connection
            y = self.multiheadAttention(self.q(inputs), self.v(inputs), self.k(inputs)) + inputs
            # feed-forward block with a residual connection
            x = self.fc1(y)
            x = self.fc2(x)
            x = x + y
            return x
    
    opened by AugustusHsu 1
  • What is the mAP on COCO17 val ?

    Hi @yyccR, thanks for your repo. I want to know if you can reach the same mAP as in original YOLOV5 (Train on COCO17 train and test on COCO17 val)? And do you have plan to release some pretrained checkpoint ?

    opened by Tyler-D 1
Releases(v1.1)
  • v1.1(Jun 24, 2022)

    v1.1 summary:

    • [1]. Set training=True in the __call__ method of tf.keras.layers.BatchNormalization
    • [2]. Added TFLite/ONNX export and validation; see /data/h5_to_tflite.py and /data/h5_to_onnx.py
    • [3]. Changed the batch_size in the backbone network; it must be specified for training and testing to avoid the FlexOps issue when exporting to TFLite
    • [4]. YoloHead no longer applies softmax over classes and uses sigmoid directly, supporting multi-class output
    • [5]. yolov5s-best.h5 in the release is the retrained weights on the Kaggle cat/dog face dataset, with an 8:2 train:test split; approximate val metrics:

    | class | mAP@0.5 | mAP@0.5:0.95 | precision | recall |
    | :-: | :-: | :-: | :-: | :-: |
    | cat | 0.962680 | 0.672483 | 0.721003 | 0.958333 |
    | dog | 0.934285 | 0.546893 | 0.770701 | 0.923664 |
    | total | 0.948482 | 0.609688 | 0.745852 | 0.940999 |

    • [6]. yolov5s-best.tflite in the release is the quantized TFLite model of the above yolov5s-best.h5; it is recommended to open it in Netron to inspect the inputs and outputs
    • [7]. yolov5s-best.onnx in the release is the ONNX model of the above yolov5s-best.h5; it is also recommended to inspect it in Netron
    • [8]. Android model test results are as follows:

    That's all. Keep going! 💪🏻💪🏻💪🏻

    Source code(tar.gz)
    Source code(zip)
    yolov5s-best.h5(27.51 MB)
    yolov5s-best.onnx(27.25 MB)
    yolov5s-best.tflite(6.95 MB)
  • v1.0(Jun 21, 2022)

    v1.0 summary:

    • [1]. The overall model structure follows ultralytics/yolov5 v6.0
    • [2]. The swish activation in the Conv layers is replaced with ReLU
    • [3]. The overall data augmentation follows ultralytics/yolov5
    • [4]. The dataset used for training in the readme is the public Kaggle cat/dog face detection dataset, which has been added to the release assets
    • [5]. Why not train on COCO? No resources: a COCO run takes a long time and the server always has jobs running, so there has been no chance to run it - . -
    • [6]. yolov5s-best.h5 in the release is the weights trained on the Kaggle cat/dog face dataset mentioned above, with an 8:2 train:test split; approximate val metrics:

    | class | mAP@0.5 | mAP@0.5:0.95 | precision | recall |
    | :-: | :-: | :-: | :-: | :-: |
    | cat | 0.905156 | 0.584378 | 0.682848 | 0.886555 |
    | dog | 0.940633 | 0.513005 | 0.724036 | 0.934866 |
    | total | 0.922895 | 0.548692 | 0.703442 | 0.910710 |

    That's all. Keep going! 💪🏻💪🏻💪🏻

    Source code(tar.gz)
    Source code(zip)
    JPEGImages.zip(260.17 MB)
    yolov5s-best.h5(27.51 MB)
Owner
yangcheng