ZF_UNET_224 Pretrained Model

Overview

Modification of the convolutional neural net "UNET" for image segmentation in the Keras framework.

Requirements

Python 3.*, Keras 2.1, TensorFlow 1.4

Usage

from zf_unet_224_model import ZF_UNET_224, dice_coef_loss, dice_coef
from keras.optimizers import Adam

# Build the model and load the weights pretrained on the random image generator
model = ZF_UNET_224(weights='generator')
optim = Adam()
model.compile(optimizer=optim, loss=dice_coef_loss, metrics=[dice_coef])

model.fit(...)
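
The imported dice_coef and dice_coef_loss implement a smoothed soft-Dice overlap between the predicted and ground-truth masks. A minimal sketch of this kind of loss, assuming a smoothing term of 1.0 as discussed in the comments below (see zf_unet_224_model.py for the actual definitions):

from keras import backend as K

def dice_coef(y_true, y_pred, smooth=1.0):
    # Soft Dice: 2 * intersection / (sum of both masks), smoothed to avoid division by zero
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    # Minimizing the negative coefficient maximizes overlap with the ground-truth mask
    return -dice_coef(y_true, y_pred)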

Notes

Pretrained weights

Download: weights for the TensorFlow backend, ~123 MB (Keras 2.1, Dice coef: 0.998)
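
Once downloaded, the weights can also be loaded manually instead of using the weights='generator' shortcut. A short sketch, assuming the download is a standard Keras HDF5 weights file and that ZF_UNET_224() can be constructed without pretrained weights (the filename below is illustrative; check zf_unet_224_model.py for the exact constructor arguments):

from zf_unet_224_model import ZF_UNET_224

model = ZF_UNET_224()
model.load_weights('zf_unet_224.h5')  # illustrative path to the downloaded weights file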

Weights were obtained with a random image generator (generator code is available in train_infinite_generator.py). See example images from the generator below.

Example of images from generator
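
A simplified sketch of what such a generator might look like (illustrative only: the real train_infinite_generator.py draws random ellipses with a lighter or darker foreground, and its exact shapes, colors and preprocessing may differ):

import cv2
import numpy as np

def gen_random_batch(batch_size=16, img_size=224):
    # Illustrative infinite generator: dark random background plus one brighter
    # random ellipse per image, together with the matching binary mask.
    while True:
        images = np.zeros((batch_size, img_size, img_size, 3), dtype=np.float32)
        masks = np.zeros((batch_size, 1, img_size, img_size), dtype=np.float32)
        for i in range(batch_size):
            img = np.random.randint(0, 100, (img_size, img_size, 3)).astype(np.uint8)
            mask = np.zeros((img_size, img_size), dtype=np.uint8)
            center = (int(np.random.randint(30, img_size - 30)),
                      int(np.random.randint(30, img_size - 30)))
            axes = (int(np.random.randint(10, 60)), int(np.random.randint(10, 60)))
            cv2.ellipse(img, center, axes, 0, 0, 360, (200, 200, 200), -1)
            cv2.ellipse(mask, center, axes, 0, 0, 360, 1, -1)
            images[i] = img.astype(np.float32) / 255.0
            # Masks are kept in (1, height, width) layout, matching the batch shape
            # reported in the comments below; adjust to your own data format.
            masks[i, 0] = mask
        yield images, masks

Batches from such a generator can be fed directly to model.fit_generator(gen_random_batch(), steps_per_epoch=100, epochs=10).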

Dice coefficient for the pretrained weights: ~0.998. See the training history below:

Log of dice coefficient during training process

Comments
  • Extended example

    Hi, I have created an extended example based on your repository: https://github.com/mrgloom/keras-semantic-segmentation-example

    It also uses random colors for the foreground and background (not just lighter and darker as here: https://github.com/ZFTurbo/ZF_UNET_224_Pretrained_Model/blob/master/train_infinite_generator.py#L24). One idea behind this is that the network can then learn the 'shape of the object' rather than just thresholding to separate background from foreground; it also looks like random colors make the problem harder, so the network converges more slowly.

    I have also run into some problems:

    1. The network does not always converge on a second run with fixed params, even for this toy problem; it looks like it depends on the random seed.
    2. Dice loss and Jaccard loss are harder to train with than binary cross-entropy - any ideas why? The network architecture is the same, only the loss differs. I even tried loading trained weights from the binary cross-entropy network into the Dice-loss network, and they show a high Dice coefficient.
    opened by mrgloom 8
  • Deeper network

    I know this is not an issue, but I wanted to contact you to ask how you made the network deeper in Keras for the DSTL competition using this model?

    opened by nassarofficial 6
  • Tensorflow problem

    When I use tensorflow-1.3.0 as the backend, I get this error:

    builtins.ValueError: Dimension 2 in both shapes must be equal, but are 3 and 32 for 'Assign' (op: 'Assign') with input shapes: [3,3,3,32], [3,3,32,3].
    
    opened by lawlite19 5
  • preprocess_batch for real data

    Here is the preprocessing for the batch (looks like 256 should be 255 ;)): https://github.com/ZFTurbo/ZF_UNET_224_Pretrained_Model/blob/master/zf_unet_224_model.py#L27

    Is it OK for real images to use code like this, or should the mean and std be calculated over the entire dataset?

    batch = batch - np.mean(batch)
    batch = batch / np.std(batch)

    Also, how crucial is the impact of data normalization for U-Net? In my tests, even on this simple synthetic data, the network doesn't converge if the input is not normalized.

    opened by mrgloom 2
  • Applying pretrained weights to 128*128 size image

    You have generated pretrained weights for a 224*224 input size, but I have 128*128. How can we use such weights in this situation, without padding/upsampling the 128*128 images? Sorry for the silly question - is it worth trying in the Kaggle salt competition?

    opened by Diyago 1
  • Attribute Error

    Traceback (most recent call last):
      File "train.py", line 11, in <module>
        import segmentation_models as sm
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/segmentation_models/__init__.py", line 98, in <module>
        set_framework(_framework)
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/segmentation_models/__init__.py", line 68, in set_framework
        import efficientnet.keras  # init custom objects
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/efficientnet/keras.py", line 17, in <module>
        init_keras_custom_objects()
      File "/home/melih/anaconda3/envs/ai/lib/python3.6/site-packages/efficientnet/__init__.py", line 71, in init_keras_custom_objects
        keras.utils.generic_utils.get_custom_objects().update(custom_objects)
    AttributeError: module 'keras.utils' has no attribute 'generic_utils'

    When I run the code I get the traceback above, but I don't know why there is no generic_utils attribute, since it does exist in the Keras library.

    opened by melih1996 0
  • How to run the model for 6 input channels?

    Is it possible to run the model with 6 input channels? Three of them would be the RGB values, and the other three are metrics I want to pass into the architecture for my use case.

    opened by ShreyaPandita01 2
  • dice and jaccard metrics

    Thanks for the repo. I am wondering why you use a smoothing factor of 1.0 in both the dice and jaccard coefficients. Where does this value come from? And what about using a smaller value close to zero, e.g. K.epsilon()?

    opened by tinalegre 3
  • model.fit step

    Hi! I would like to know how I should call model.fit. model.fit(trainSet, mask_trainSet, batch_size=20, nb_epoch=1, verbose=1, validation_split=0.2, shuffle=True, callbacks=[model_checkpoint]) - what should I put in callbacks?

    And how should I use the weights if I want to use the pretrained weights?

    Thank you very much and sorry for the inconvenience!

    opened by AmericaBG 7
  • How to generate img and mask correctly

    I ran your code and found that the image batch has shape (16, 224, 224, 3), but the mask batch has shape (16, 1, 224, 224). I don't understand this - can you explain it to me? When I use my own dataset to train the U-Net, the dice coef is high, but the real results are bad.

    opened by wong-way 6
Releases: v1.0