Semantic segmentation in PyTorch. Networks included: FCN, FCN_ResNet, SegNet, UNet, BiSeNet, BiSeNetV2, PSPNet, DeepLabv3_plus, HRNet, DDRNet

Overview

🚀 If this project helps you, please give it a star!

Update log

  • 2020.12.10 Adjusted the project structure; the previous code has been removed and will be re-uploaded after the reorganization
  • 2021.04.09 Re-uploaded the code ("V1 Commit")
  • 2021.04.22 Added torch distributed training
  • Ongoing updates...

1. Display (Cityscapes)

  • Using the DDRNet model on the 1525 Cityscapes test images, official mIoU = 78.4069%
Average results
Class results1
Class results2
Class results3
  • Comparison of the original and predicted images
origin
label
predict
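
For reference, mIoU averages the per-class intersection-over-union over the 19 Cityscapes classes. The sketch below is a generic illustration of computing it from a confusion matrix; it is not the exact evaluation code of this repository:

import numpy as np

def compute_miou(conf_matrix):
    """Mean IoU from a (num_classes x num_classes) confusion matrix.

    conf_matrix[i, j] counts pixels whose ground-truth class is i
    and whose predicted class is j.
    """
    intersection = np.diag(conf_matrix)                        # true positives per class
    union = conf_matrix.sum(axis=1) + conf_matrix.sum(axis=0) - intersection
    iou = intersection / np.maximum(union, 1)                  # avoid division by zero
    return iou.mean()

# Example: accumulate the confusion matrix over a validation set (19 Cityscapes classes)
num_classes = 19
conf = np.zeros((num_classes, num_classes), dtype=np.int64)
# for pred, label in predictions:                # pred/label: HxW arrays of class ids
#     mask = label != 255                        # skip 'ignore' pixels
#     conf += np.bincount(num_classes * label[mask] + pred[mask],
#                         minlength=num_classes ** 2).reshape(num_classes, num_classes)
# print(compute_miou(conf))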

2. Install

pip install -r requirements.txt
Experimental environment:

  • Ubuntu 16.04, with at least one NVIDIA GPU
  • python==3.6.5
  • See requirements.txt for the full list of dependencies

3. Model

All models are built in builders/model_builder.py

  • FCN
  • FCN_ResNet
  • SegNet
  • UNet
  • BiSeNet
  • BiSeNetV2
  • PSPNet
  • DeepLabv3_plus
  • HRNet
  • DDRNet
Model         | Backbone  | Val mIoU | Test mIoU | Imagenet Pretrain      | Pretrained Model
--------------|-----------|----------|-----------|------------------------|-----------------
PSPNet        | ResNet 50 | 76.54%   | -         | -                      | PSPNet
DeeplabV3+    | ResNet 50 | 77.78%   | -         | -                      | DeeplabV3+
DDRNet23_slim | -         | -        | -         | DDRNet23_slim_imagenet | -
DDRNet23      | -         | -        | -         | DDRNet23_imagenet      | -
DDRNet39      | -         | 79.63%   | -         | DDRNet39_imagenet      | DDRNet39
More models are being added...

4. Data preprocessing

This project supports the following datasets: Cityscapes and ISPRS.
The dataset will be uploaded later...
Cityscapes dataset preparation is described below:

4.1 Download the dataset

Download the dataset from the link on the official website. The original images, with the *leftImg8bit.png suffix, are under the folder leftImg8bit; the fine-labeled images, with the a) *color.png, b) *labelIds.png and c) *instanceIds.png suffixes, are under the folder gtFine.

*leftImg8bit.png          : the original image
a) *color.png             : the class is encoded by its color
b) *labelIds.png          : the class is encoded by its ID
c) *instanceIds.png       : the class and the instance are encoded by an instance ID

4.2 Onehot encoding of label image

The semantic segmentation task uses grayscale label images with train IDs 0-18, so the labels need to be re-encoded before one-hot encoding can be applied. Use the script dataset/cityscapes/cityscapes_scripts/process_cityscapes.py to process the images and obtain the *labelTrainIds.png results. Usage of process_cityscapes.py: set `Cityscapes_path` (around line 486) to the path where your own data is stored.
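
As a rough illustration (not the repository's script itself), the re-encoding can be sketched as follows, assuming the standard Cityscapes labelId-to-trainId mapping:

import numpy as np
from PIL import Image

# Standard Cityscapes mapping from labelIds to the 19 train ids (0-18);
# every id not listed here is treated as 'ignore' (255).
ID_TO_TRAINID = {7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 19: 6, 20: 7, 21: 8,
                 22: 9, 23: 10, 24: 11, 25: 12, 26: 13, 27: 14, 28: 15,
                 31: 16, 32: 17, 33: 18}

def labelids_to_trainids(label_path, out_path):
    label = np.array(Image.open(label_path), dtype=np.uint8)   # *_labelIds.png
    train_ids = np.full_like(label, 255)                        # 255 = ignore
    for label_id, train_id in ID_TO_TRAINID.items():
        train_ids[label == label_id] = train_id
    Image.fromarray(train_ids).save(out_path)                   # *_labelTrainIds.png

# A one-hot map (e.g. for a BCE-style loss) can then be derived from the train ids,
# masking out ignored pixels:
# onehot = np.eye(19, dtype=np.float32)[np.clip(train_ids, 0, 18)]
# onehot[train_ids == 255] = 0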

  • Comparison of original image, color label image and gray label image (0-18)
***_leftImg8bit
***_gtFine_color
***_gtFine_labelTrainIds
  • Example local storage layout under /data/open_data/cityscapes/:
data
  |--open_data
        |--cityscapes
               |--leftImg8bit
                    |--train
                        |--cologne
                        |--*******
                    |--val
                        |--*******
                    |--test
                        |--*******
               |--gtFine
                    |--train
                        |--cologne
                        |--*******
                    |--val
                        |--*******
                    |--test
                        |--*******

4.3 Generate image path

  • Generate txt files containing the image paths
    Use the script dataset/generate_txt.py to generate the path txt files containing the original images and labels. A total of 3 txt files will be generated: cityscapes_train_list.txt, cityscapes_val_list.txt and cityscapes_test_list.txt; copy the three files to the dataset root directory.
data
  |--open_data
        |--cityscapes
               |--cityscapes_train_list.txt
               |--cityscapes_val_list.txt
               |--cityscapes_test_list.txt
               |--leftImg8bit
                    |--train
                        |--cologne
                        |--*******
                    |--val
                        |--*******
                    |--test
                        |--*******
               |--gtFine
                    |--train
                        |--cologne
                        |--*******
                    |--val
                        |--*******
                    |--test
                        |--*******
  • The contents of the txt files are shown as follows:
leftImg8bit/train/cologne/cologne_000000_000019_leftImg8bit.png gtFine/train/cologne/cologne_000000_000019_gtFine_labelTrainIds.png
leftImg8bit/train/cologne/cologne_000001_000019_leftImg8bit.png gtFine/train/cologne/cologne_000001_000019_gtFine_labelTrainIds.png
..............
  • Each line of the txt files has the following format:
original image path + the separator '\t' + label path + the separator '\n'
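
The sketch below illustrates how such list files could be generated under the directory layout shown above. It is only an approximation of what dataset/generate_txt.py does; the function name and details are assumptions:

import os

def write_list(root, split, out_txt):
    """Write '<image path>\t<label path>' lines for one Cityscapes split.

    A minimal sketch assuming the directory layout shown above; the project's
    own dataset/generate_txt.py may differ in details.
    """
    img_dir = os.path.join(root, "leftImg8bit", split)
    with open(out_txt, "w") as f:
        for city in sorted(os.listdir(img_dir)):
            for name in sorted(os.listdir(os.path.join(img_dir, city))):
                if not name.endswith("_leftImg8bit.png"):
                    continue
                img_rel = os.path.join("leftImg8bit", split, city, name)
                # leftImg8bit/.../xxx_leftImg8bit.png -> gtFine/.../xxx_gtFine_labelTrainIds.png
                lbl_rel = img_rel.replace("leftImg8bit", "gtFine").replace(
                    ".png", "_labelTrainIds.png")
                f.write(img_rel + "\t" + lbl_rel + "\n")

# write_list("/data/open_data/cityscapes", "train", "cityscapes_train_list.txt")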

TODO.....

5. How to train

sh train.sh

5.1 Parameters

python -m torch.distributed.launch --nproc_per_node=2 \
                train.py --model PSPNet_res50 --out_stride 8 \
                --max_epochs 200 --val_epochs 20 --batch_size 4 --lr 0.01 --optim sgd --loss ProbOhemCrossEntropy2d \
                --base_size 768 --crop_size 768  --tile_hw_size 768,768 \
                --root '/data/open_data' --dataset cityscapes --gpus_id 1,2
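
For context, torch.distributed.launch starts one process per GPU (--nproc_per_node) and passes each process a --local_rank argument. The following is a generic DistributedDataParallel skeleton showing the standard PyTorch pattern such a launch expects; it is not the exact code of train.py:

import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Generic DistributedDataParallel skeleton (standard PyTorch pattern, not the
# repository's exact train.py). torch.distributed.launch passes --local_rank
# to every process it spawns.
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
args = parser.parse_args()

dist.init_process_group(backend="nccl")          # env:// init, set up by the launcher
torch.cuda.set_device(args.local_rank)

model = torch.nn.Conv2d(3, 19, 1).cuda()         # placeholder for the segmentation model
model = DDP(model, device_ids=[args.local_rank])

# Each process should see a different shard of the data, e.g.:
#   from torch.utils.data.distributed import DistributedSampler
#   sampler = DistributedSampler(train_dataset)
#   loader = torch.utils.data.DataLoader(train_dataset, batch_size=4, sampler=sampler)
#   for epoch in range(max_epochs):
#       sampler.set_epoch(epoch)                 # reshuffle differently each epoch
#       ...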

6. How to validate

sh predict.sh
Issues and pull requests
  • size doesn't match error when run the train.py

    size doesn't match error when run the train.py

    Sorry to disturb you. When I run train.py it goes wrong; I printed the sizes of the two tensors and they are the same, so I can't find the cause. How can I solve it?

    ******* Begining traing *******
    Epoch 0/300:   0%| | 0/909 [00:00<?, ?it/s]
    /usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py:1350: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
      warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
    Traceback (most recent call last):
      File "/home/yeluyue/dl/bottle_Segmentation/train.py", line 415, in <module>
        train_model(args)
      File "/home/yeluyue/dl/bottle_Segmentation/train.py", line 293, in train_model
        lossTr, lr = train(args, trainLoader, model, criteria, optimizer, epoch)
      File "/home/yeluyue/dl/bottle_Segmentation/train.py", line 117, in train
        loss = criterion(output, labels)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/yeluyue/dl/bottle_Segmentation/tools/loss.py", line 27, in forward
        return self.loss(outputs, targets)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 498, in forward
        return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
      File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 2047, in binary_cross_entropy
        new_size = _infer_size(target.size(), weight.size())
    RuntimeError: The size of tensor a (512) must match the size of tensor b (2) at non-singleton dimension 2

    opened by lj107024 4
  • AttributeError in HRNet

    AttributeError in HRNet

    When I run ./model/HRNet.py on Ubuntu 18.04 with torch 1.8.0+cu111, the error is raised as follows:

    /home/vgc/users/lwz/code/rice_seg/template/Segmentation-Pytorch/model/HRNet.py:329: DeprecationWarning: np.int is a deprecated alias for the builtin int. To silence this warning, use int by itself. Doing this will not modify any behavior and is safe. When replacing np.int, you may wish to use e.g. np.int64 or np.int32 to specify the precision. If you wish to review your current use, check the release note link for additional information. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
      last_inp_channels = np.int(np.sum(pre_stage_channels))
    Traceback (most recent call last):
      File "/home/vgc/users/lwz/code/rice_seg/template/Segmentation-Pytorch/model/HRNet.py", line 520, in <module>
        summary(model, (3, 512, 512), device="cpu")
      File "/home/vgc/anaconda3/envs/lwz37/lib/python3.7/site-packages/torchsummary/torchsummary.py", line 72, in summary
        model(*x)
      File "/home/vgc/anaconda3/envs/lwz37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/vgc/users/lwz/code/rice_seg/template/Segmentation-Pytorch/model/HRNet.py", line 447, in forward
        y_list = self.stage2(x_list)
      File "/home/vgc/anaconda3/envs/lwz37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/vgc/anaconda3/envs/lwz37/lib/python3.7/site-packages/torch/nn/modules/container.py", line 119, in forward
        input = module(input)
      File "/home/vgc/anaconda3/envs/lwz37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 893, in _call_impl
        hook_result = hook(self, input, result)
      File "/home/vgc/anaconda3/envs/lwz37/lib/python3.7/site-packages/torchsummary/torchsummary.py", line 19, in hook
        summary[m_key]["input_shape"] = list(input[0].size())
    AttributeError: 'list' object has no attribute 'size'

    opened by rrryan2016 2
  • bug about generate_txt.py

    bug about generate_txt.py

    In line 26, filename_gt = filename.replace('leftImg8bit', 'gtFine') should be changed to filename_gt = filename.replace('leftImg8bit', 'gtFine').replace('.png', '_labelTrainIds.png') to produce the expected format of leftImg8bit/train/cologne/cologne_000001_000019_leftImg8bit.png gtFine/train/cologne/cologne_000001_000019_gtFine_labelTrainIds.png.

    Before this change, the result is leftImg8bit/train/cologne/cologne_000001_000019_leftImg8bit.png gtFine/train/cologne/cologne_000001_000019_gtFine.png, and the program won't find the file.

    If I am wrong, please tell me. Best wishes.

    opened by songzijiang 2
  • With the ENet model, training works fine, but prediction runs out of memory

    With the ENet model, training works fine, but prediction runs out of memory

    Training works fine, but prediction raises: RuntimeError: CUDA out of memory. Tried to allocate 188.00 MiB (GPU 0; 6.00 GiB total capacity; 4.21 GiB already allocated; 63.85 MiB free; 81.86 MiB cached). When using predict_sliding, an exception occurs: TypeError: __init__() got an unexpected keyword argument 'std'. How can this be solved?

    opened by FelixJiao 2
  • Bump opencv-python from 4.1.0.25 to 4.2.0.32

    Bump opencv-python from 4.1.0.25 to 4.2.0.32

    Bumps opencv-python from 4.1.0.25 to 4.2.0.32.

    Release notes

    Sourced from opencv-python's releases.

    4.2.0.32

    OpenCV version 4.2.0.

    Changes:

    • macOS environment updated from xcode8.3 to xcode 9.4
    • macOS uses now Qt 5 instead of Qt 4
    • Nasm version updated to Docker containers
    • multibuild updated

    Fixes:

    • don't use deprecated brew tap-pin, instead refer to the full package name when installing #267
    • replace get_config_var() with get_config_vars() in setup.py #274
    • add workaround for DLL errors in Windows Server #264

    4.1.2.30

    OpenCV version 4.1.2.

    Changes:

    • Python 3.8 builds added to the build matrix
    • Support for Python 3.4 builds dropped (Python 3.4 is in EOL)
    • multibuild updated
    • minor build logic changes
    • Docker images rebuilt

    Notes:

    Please note that Python 2.7 enters into EOL phase in January 2020. opencv-python Python 2.7 wheels won't be provided after that.

    4.1.1.26

    OpenCV version 4.1.1.

    Changes:

    ... (truncated)


    dependencies 
    opened by dependabot[bot] 1
  • Bump opencv-python from 4.1.0.25 to 4.1.1.26

    Bump opencv-python from 4.1.0.25 to 4.1.1.26

    Bumps opencv-python from 4.1.0.25 to 4.1.1.26.

    Release notes

    Sourced from opencv-python's releases.

    4.1.1.26

    OpenCV version 4.1.1.

    Changes:

    • FFmpeg has been compiled with https support on Linux builds #229
    • CI build logic related changes #197, #227, #228
    • Custom libjepg-turbo removed because it's provided by OpenCV #231
    • 64-bit Qt builds are now smaller #236
    • Custom builds should be now rather easy to do locally #235:
      1. Clone this repository
      2. Optional: set up ENABLE_CONTRIB and ENABLE_HEADLESS environment variables to 1 if needed
      3. Optional: add additional Cmake arguments to CMAKE_ARGS environment variable
      4. Run python setup.py bdist_wheel

    dependencies 
    opened by dependabot[bot] 1
  • Bump numpy from 1.15.1 to 1.22.0

    Bump numpy from 1.15.1 to 1.22.0

    Bumps numpy from 1.15.1 to 1.22.0.

    Release notes

    Sourced from numpy's releases.

    v1.22.0

    NumPy 1.22.0 Release Notes

    NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:

    • Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.
    • A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across applications such as CuPy and JAX.
    • NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.
    • New methods for quantile, percentile, and related functions. The new methods provide a complete set of the methods commonly found in the literature.
    • A new configurable allocator for use by downstream projects.

    These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.

    The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.

    Expired deprecations

    Deprecated numeric style dtype strings have been removed

    Using the strings "Bytes0", "Datetime64", "Str0", "Uint32", and "Uint64" as a dtype will now raise a TypeError.

    (gh-19539)

    Expired deprecations for loads, ndfromtxt, and mafromtxt in npyio

    numpy.loads was deprecated in v1.15, with the recommendation that users use pickle.loads instead. ndfromtxt and mafromtxt were both deprecated in v1.17 - users should use numpy.genfromtxt instead with the appropriate value for the usemask parameter.

    (gh-19615)

    ... (truncated)


    dependencies 
    opened by dependabot[bot] 0
Releases (v1.0.0)
Owner
Deeachain
Graduate students from outer space
Code for "Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks", CVPR 2021

Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks This repository contains the code that accompanies our CVPR 20

Despoina Paschalidou 161 Dec 20, 2022
Air Quality Prediction Using LSTM

AirQualityPredictionUsingLSTM In this Repo, i present to you the winning solution of smart gujarat hackathon 2019 where the task was to predict the qu

Deepak Nandwani 2 Dec 13, 2022
This repository contains answers of the Shopify Summer 2022 Data Science Intern Challenge.

Data-Science-Intern-Challenge This repository contains answers of the Shopify Summer 2022 Data Science Intern Challenge. Summer 2022 Data Science Inte

1 Jan 11, 2022
Seg-Torch for Image Segmentation with Torch

Seg-Torch for Image Segmentation with Torch This work was sparked by my personal research on simple segmentation methods based on deep learning. It is

Eren Gölge 37 Dec 12, 2022
LeViT a Vision Transformer in ConvNet's Clothing for Faster Inference

LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference This repository contains PyTorch evaluation code, training code and pretrained

Facebook Research 504 Jan 02, 2023
Fully convolutional networks for semantic segmentation

FCN-semantic-segmentation Simple end-to-end semantic segmentation using fully convolutional networks [1]. Takes a pretrained 34-layer ResNet [2], remo

Kai Arulkumaran 186 Dec 25, 2022
FastFace: Lightweight Face Detection Framework

Light Face Detection using PyTorch Lightning

Ömer BORHAN 75 Dec 05, 2022
Pytorch implementation of SimSiam Architecture

SimSiam-pytorch A simple pytorch implementation of Exploring Simple Siamese Representation Learning which is developed by Facebook AI Research (FAIR)

Saeed Shurrab 1 Oct 20, 2021
Code for Discriminative Sounding Objects Localization (NeurIPS 2020)

Discriminative Sounding Objects Localization Code for our NeurIPS 2020 paper Discriminative Sounding Objects Localization via Self-supervised Audiovis

51 Dec 11, 2022
Notes, programming assignments and quizzes from all courses within the Coursera Deep Learning specialization offered by deeplearning.ai

Coursera-deep-learning-specialization - Notes, programming assignments and quizzes from all courses within the Coursera Deep Learning specialization offered by deeplearning.ai: (i) Neural Networks an

Aman Chadha 1.7k Jan 08, 2023
End-to-End Speech Processing Toolkit

ESPnet: end-to-end speech processing toolkit system/pytorch ver. 1.3.1 1.4.0 1.5.1 1.6.0 1.7.1 1.8.1 1.9.0 ubuntu20/python3.9/pip ubuntu20/python3.8/p

ESPnet 5.9k Jan 04, 2023
Semantic-aware Grad-GAN for Virtual-to-Real Urban Scene Adaption

SG-GAN TensorFlow implementation of SG-GAN. Prerequisites TensorFlow (implemented in v1.3) numpy scipy pillow Getting Started Train Prepare dataset. W

lplcor 61 Jun 07, 2022
AdaFocus (ICCV 2021) Adaptive Focus for Efficient Video Recognition

AdaFocus (ICCV 2021) This repo contains the official code and pre-trained models for AdaFocus. Adaptive Focus for Efficient Video Recognition Referenc

Rainforest Wang 115 Dec 21, 2022
Automatic Data-Regularized Actor-Critic (Auto-DrAC)

Auto-DrAC: Automatic Data-Regularized Actor-Critic This is a PyTorch implementation of the methods proposed in Automatic Data Augmentation for General

89 Dec 13, 2022
Fully Convolutional Refined Auto Encoding Generative Adversarial Networks for 3D Multi Object Scenes

Fully Convolutional Refined Auto-Encoding Generative Adversarial Networks for 3D Multi Object Scenes This repository contains the source code for Full

Yu Nishimura 106 Nov 21, 2022
Vision-and-Language Navigation in Continuous Environments using Habitat

Vision-and-Language Navigation in Continuous Environments (VLN-CE) Project Website — VLN-CE Challenge — RxR-Habitat Challenge Official implementations

Jacob Krantz 132 Jan 02, 2023
This repo. is an implementation of ACFFNet, which is accepted for in Image and Vision Computing.

Attention-Guided-Contextual-Feature-Fusion-Network-for-Salient-Object-Detection This repo. is an implementation of ACFFNet, which is accepted for in I

5 Nov 21, 2022
Laplace Redux -- Effortless Bayesian Deep Learning

Laplace Redux - Effortless Bayesian Deep Learning This repository contains the code to run the experiments for the paper Laplace Redux - Effortless Ba

Runa Eschenhagen 28 Dec 07, 2022
Dynamic Neural Representational Decoders for High-Resolution Semantic Segmentation

Dynamic Neural Representational Decoders for High-Resolution Semantic Segmentation Requirements This repository needs mmsegmentation Training To train

Adelaide Intelligent Machines (AIM) Group 7 Sep 12, 2022
Facial Image Inpainting with Semantic Control

Facial Image Inpainting with Semantic Control In this repo, we provide a model for the controllable facial image inpainting task. This model enables u

Ren Yurui 8 Nov 22, 2021