CONDitionals for Ordinal Regression and classification in TensorFlow

Overview

CONDOR ordinal regression in TensorFlow Keras


TensorFlow Keras implementation of CONDOR ordinal regression (aka ordinal classification) by Garrett Jenkinson et al. (2021).

CONDOR is compatible with any state-of-the-art deep neural network architecture, requiring only modification of the output layer, the labels, and the loss function. Read our full documentation to learn more.

We have also implemented CONDOR for PyTorch.

This package includes:

  • Ordinal TensorFlow loss function: CondorOrdinalCrossEntropy
  • Ordinal TensorFlow error metric: OrdinalMeanAbsoluteError
  • Ordinal TensorFlow error metric: OrdinalEarthMoversDistance
  • Ordinal TensorFlow sparse loss function: CondorSparseOrdinalCrossEntropy
  • Ordinal TensorFlow sparse error metric: SparseOrdinalMeanAbsoluteError
  • Ordinal TensorFlow sparse error metric: SparseOrdinalEarthMoversDistance
  • Ordinal TensorFlow activation function: ordinal_softmax
  • Ordinal sklearn label encoder: CondorOrdinalEncoder
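
A minimal sketch of the label encoder in isolation (the data are illustrative; the key point is that labels over nclasses ordered categories become nclasses - 1 binary columns per sample):

import numpy as np
import condor_tensorflow as condor

labels = np.array([0, 3, 1, 2])                # ordinal labels 0 through 3 (4 classes)
enc = condor.CondorOrdinalEncoder(nclasses=4)
enc_labs = enc.fit_transform(labels)
print(enc_labs.shape)                          # (4, 3), i.e. (n_samples, nclasses - 1)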

Installation

Install the stable version via pip:

pip install condor-tensorflow

Alternatively, install the most recent code from GitHub via pip:

pip install git+https://github.com/GarrettJenkinson/condor_tensorflow/

condor_tensorflow should now be available for use as a Python library.

Docker container

As an alternative to the above, we provide a convenient Dockerfile that will build a container with condor_tensorflow along with all of its dependencies (Python 3.6+, TensorFlow 2.2+, sklearn, numpy). This can be used as follows:

# Clone this git repository
git clone https://github.com/GarrettJenkinson/condor_tensorflow/

# Change directory to the cloned repository root
cd condor_tensorflow

# Create a docker image
docker build -t cpu_tensorflow -f cpu.Dockerfile ./

# Run the image to serve a Jupyter notebook
docker run -it -p 8888:8888 --rm cpu_tensorflow

# Run bash inside the container (its Python has the required dependencies available)
docker run -u $(id -u):$(id -g) -it -p 8888:8888 --rm cpu_tensorflow bash

Assuming a GPU-enabled machine with NVIDIA drivers installed, replace cpu above with gpu.

Example

This is a quick example showing the basic model implementation syntax.
The example assumes the existence of input data (variable 'X') and ordinal labels (variable 'labels').

import tensorflow as tf
import condor_tensorflow as condor
NUM_CLASSES = 5
# Ordinal 'labels' variable has 5 labels, 0 through 4.
enc_labs = condor.CondorOrdinalEncoder(nclasses=NUM_CLASSES).fit_transform(labels)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(32, activation = "relu"))
model.add(tf.keras.layers.Dense(NUM_CLASSES-1)) # Note the "-1"
model.compile(loss = condor.CondorOrdinalCrossEntropy(),
              metrics = [condor.OrdinalMeanAbsoluteError()])
model.fit(x = X, y = enc_labs)
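
After training, the model's raw outputs are the NUM_CLASSES - 1 cumulative logits. A sketch of turning predictions into class probabilities and hard labels, continuing from the example above and assuming ordinal_softmax maps the NUM_CLASSES - 1 logits to NUM_CLASSES probabilities:

logits = model.predict(X)               # shape (n, NUM_CLASSES - 1): raw cumulative logits
probs = condor.ordinal_softmax(logits)  # shape (n, NUM_CLASSES): class probabilities
pred_labels = tf.argmax(probs, axis=1)  # hard label predictions, 0 through 4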

See this Colab notebook for extended examples of ordinal regression with MNIST and Amazon reviews (Universal Sentence Encoder).

Please post any issues to the issue queue.

Acknowledgments: Many thanks to the CORAL ordinal authors and the CORAL PyTorch authors, whose repos provided a roadmap for this codebase.

References

Jenkinson, Khezeli, Oliver, Kalantari, Klee. Universally rank consistent ordinal regression in neural networks, arXiv:2110.07470, 2021.

Comments
  • providing weighted metric causes error


    example code:

    compileOptions = {
        'optimizer': tf.keras.optimizers.Adam(learning_rate=5e-4),
        'loss': condor.CondorOrdinalCrossEntropy(),
        'metrics': [
            condor.OrdinalEarthMoversDistance(name='condorErrOrdinalMoversDist'),
            condor.OrdinalMeanAbsoluteError(name='ordinalMAbsErr')
        ],
        'weighted_metrics': [
            condor.OrdinalEarthMoversDistance(name='condorErrOrdinalMoversDist'),
            condor.OrdinalMeanAbsoluteError(name='ordinalMAbsErr')
        ]
    }

    model.compile(**compileOptions)
    model.fit(x=X_train, y=Y_train, batch_size=32, epochs=100,
              validation_data=(x_val, y_val, val_sample_weights),
              sample_weight=sampleweight_train)
    
    

    This would generate the following error:

    
        File "/usr/local/lib/python3.7/dist-packages/condor_tensorflow/metrics.py", line 24, in update_state  *
            if sample_weight:
    
        ValueError: condition of if statement expected to be `tf.bool` scalar, got Tensor("ExpandDims_1:0", shape=(None, 1), dtype=float32); to use as boolean Tensor, use `tf.cast`; to check for None, use `is not None`
    

    If I don't provide weighted_metrics in the model.compile options but still use the sample_weight=sampleweight_train argument in model.fit, no errors show up.
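
    The traceback points at the truthiness test on the weight tensor (if sample_weight:). As an illustration only (an editor's sketch, not the package's actual patch), the error disappears once the check is an explicit None test:

    import tensorflow as tf

    # Hypothetical helper: truth-testing a Tensor raises the ValueError above,
    # while an explicit `is not None` check handles both Tensors and None.
    def apply_sample_weight(values, sample_weight=None):
        values = tf.convert_to_tensor(values, dtype=tf.float32)
        if sample_weight is not None:
            values = values * tf.cast(sample_weight, tf.float32)
        return values

    print(apply_sample_weight([1.0, 2.0], sample_weight=[0.5, 2.0]))
    # tf.Tensor([0.5 4. ], shape=(2,), dtype=float32)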

    Thank you!

    enhancement 
    opened by tingjhenjiang 7
  • loss reduction support


    When I do distributed training, including training on a Google Colab TPU, the error shown below occurs:

    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
        528     self._self_setattr_tracking = False  # pylint: disable=protected-access
        529     try:
    --> 530       result = method(self, *args, **kwargs)
        531     finally:
        532       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access
    
    /usr/local/lib/python3.7/dist-packages/keras/engine/training_v1.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, distribute, **kwargs)
        434           targets=self._targets,
        435           skip_target_masks=self._prepare_skip_target_masks(),
    --> 436           masks=self._prepare_output_masks())
        437 
        438       # Prepare sample weight modes. List with the same length as model outputs.
    
    /usr/local/lib/python3.7/dist-packages/keras/engine/training_v1.py in _handle_metrics(self, outputs, targets, skip_target_masks, sample_weights, masks, return_weighted_metrics, return_weighted_and_unweighted_metrics)
       1962           metric_results.extend(
       1963               self._handle_per_output_metrics(self._per_output_metrics[i],
    -> 1964                                               target, output, output_mask))
       1965         if return_weighted_and_unweighted_metrics or return_weighted_metrics:
       1966           metric_results.extend(
    
    /usr/local/lib/python3.7/dist-packages/keras/engine/training_v1.py in _handle_per_output_metrics(self, metrics_dict, y_true, y_pred, mask, weights)
       1913       with backend.name_scope(metric_name):
       1914         metric_result = training_utils_v1.call_metric_function(
    -> 1915             metric_fn, y_true, y_pred, weights=weights, mask=mask)
       1916         metric_results.append(metric_result)
       1917     return metric_results
    
    /usr/local/lib/python3.7/dist-packages/keras/engine/training_utils_v1.py in call_metric_function(metric_fn, y_true, y_pred, weights, mask)
       1175 
       1176   if y_pred is not None:
    -> 1177     return metric_fn(y_true, y_pred, sample_weight=weights)
       1178   # `Mean` metric only takes a single value.
       1179   return metric_fn(y_true, sample_weight=weights)
    
    /usr/local/lib/python3.7/dist-packages/keras/metrics.py in __call__(self, *args, **kwargs)
        235     from keras.distribute import distributed_training_utils  # pylint:disable=g-import-not-at-top
        236     return distributed_training_utils.call_replica_local_fn(
    --> 237         replica_local_fn, *args, **kwargs)
        238 
        239   def __str__(self):
    
    /usr/local/lib/python3.7/dist-packages/keras/distribute/distributed_training_utils.py in call_replica_local_fn(fn, *args, **kwargs)
         58     with strategy.scope():
         59       return strategy.extended.call_for_each_replica(fn, args, kwargs)
    ---> 60   return fn(*args, **kwargs)
         61 
         62 
    
    /usr/local/lib/python3.7/dist-packages/keras/metrics.py in replica_local_fn(*args, **kwargs)
        215         update_op = None
        216       else:
    --> 217         update_op = self.update_state(*args, **kwargs)  # pylint: disable=not-callable
        218       update_ops = []
        219       if update_op is not None:
    
    /usr/local/lib/python3.7/dist-packages/keras/utils/metrics_utils.py in decorated(metric_obj, *args, **kwargs)
         71 
         72     with tf_utils.graph_context_for_symbolic_tensors(*args, **kwargs):
    ---> 73       update_op = update_state_fn(*args, **kwargs)
         74     if update_op is not None:  # update_op will be None in eager execution.
         75       metric_obj.add_update(update_op)
    
    /usr/local/lib/python3.7/dist-packages/keras/metrics.py in update_state_fn(*args, **kwargs)
        175         control_status = tf.__internal__.autograph.control_status_ctx()
        176         ag_update_state = tf.__internal__.autograph.tf_convert(obj_update_state, control_status)
    --> 177         return ag_update_state(*args, **kwargs)
        178     else:
        179       if isinstance(obj.update_state, tf.__internal__.function.Function):
    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
        694       try:
        695         with conversion_ctx:
    --> 696           return converted_call(f, args, kwargs, options=options)
        697       except Exception as e:  # pylint:disable=broad-except
        698         if hasattr(e, 'ag_error_metadata'):
    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
        381 
        382   if not options.user_requested and conversion.is_allowlisted(f):
    --> 383     return _call_unconverted(f, args, kwargs, options)
        384 
        385   # internal_convert_user_code is for example turned off when issuing a dynamic
    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in _call_unconverted(f, args, kwargs, options, update_cache)
        462 
        463   if kwargs is not None:
    --> 464     return f(*args, **kwargs)
        465   return f(*args)
        466 
    
    /usr/local/lib/python3.7/dist-packages/keras/metrics.py in update_state(self, y_true, y_pred, sample_weight)
        723 
        724     ag_fn = tf.__internal__.autograph.tf_convert(self._fn, tf.__internal__.autograph.control_status_ctx())
    --> 725     matches = ag_fn(y_true, y_pred, **self._fn_kwargs)
        726     return super(MeanMetricWrapper, self).update_state(
        727         matches, sample_weight=sample_weight)
    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
        694       try:
        695         with conversion_ctx:
    --> 696           return converted_call(f, args, kwargs, options=options)
        697       except Exception as e:  # pylint:disable=broad-except
        698         if hasattr(e, 'ag_error_metadata'):
    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
        381 
        382   if not options.user_requested and conversion.is_allowlisted(f):
    --> 383     return _call_unconverted(f, args, kwargs, options)
        384 
        385   # internal_convert_user_code is for example turned off when issuing a dynamic
    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in _call_unconverted(f, args, kwargs, options, update_cache)
        462 
        463   if kwargs is not None:
    --> 464     return f(*args, **kwargs)
        465   return f(*args)
        466 
    
    /usr/local/lib/python3.7/dist-packages/keras/losses.py in __call__(self, y_true, y_pred, sample_weight)
        141       losses = call_fn(y_true, y_pred)
        142       return losses_utils.compute_weighted_loss(
    --> 143           losses, sample_weight, reduction=self._get_reduction())
        144 
        145   @classmethod
    
    /usr/local/lib/python3.7/dist-packages/keras/losses.py in _get_reduction(self)
        182          self.reduction == losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE)):
        183       raise ValueError(
    --> 184           'Please use `tf.keras.losses.Reduction.SUM` or '
        185           '`tf.keras.losses.Reduction.NONE` for loss reduction when losses are '
        186           'used with `tf.distribute.Strategy` outside of the built-in training '
    
    ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:
    
    with strategy.scope():
        loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
        loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
    Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.
    

    It seems that support for loss reduction has not been implemented. It may be a little tricky, but it would be nice if you could add this enhancement.

    Thank you!

    enhancement 
    opened by tingjhenjiang 3
  • Importance weights.


    I had a question about the importance weights code below that was in one of the tutorial docs.

    Importance weights customization
    A quick example to show how the importance weights can be customized.
    model = create_model(num_classes = NUM_CLASSES)
    model.summary()
    # We have num_classes - 1 outputs (cumulative logits), so there are 9 elements
    # in the importance vector to customize.
    importance_weights = [1., 1., 0.5, 0.5, 0.5, 1., 1., 0.1, 0.1]
    loss_fn = condor.SparseCondorOrdinalCrossEntropy(importance_weights = importance_weights)
    model.compile(tf.keras.optimizers.Adam(lr = learning_rate), loss = loss_fn)
    history = model.fit(dataset, epochs = num_epochs)
    

    My problem:

    I have 5 classes, with under-representation of, say, the first and last class. I want to use weights to assign higher importance to the under-represented classes. In a dense layer with n(classes) == n(output_layers), the vector would look like:

    [1,0.5,0.5,0.5,1]

    With CONDOR, using num_classes - 1 output units, is it still possible to assign higher weights to the under-represented classes?

    I don't understand how to relate weights over the N-1 outputs to the original weights, where n(classes) == n(output_layers).

    Any feedback is appreciated.
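
    One possible workaround (an editor's sketch, not the authors' guidance, and separate from the importance_weights mechanism) is to weight whole samples by their original class via Keras's standard sample_weight argument, so under-represented classes count more in the loss:

    import numpy as np
    import tensorflow as tf
    import condor_tensorflow as condor

    NUM_CLASSES = 5
    # Hypothetical toy data: integer ordinal labels 0..4 and random features.
    labels = np.array([0, 4, 2, 1, 0, 4, 3])
    X = np.random.rand(len(labels), 8).astype("float32")

    # Per-class weights indexed by the original class id; the rare first and
    # last classes are upweighted.
    class_weights = np.array([2.0, 1.0, 1.0, 1.0, 2.0])
    sample_weight = class_weights[labels]

    enc_labs = condor.CondorOrdinalEncoder(nclasses=NUM_CLASSES).fit_transform(labels)

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES - 1),
    ])
    model.compile(loss=condor.CondorOrdinalCrossEntropy(),
                  metrics=[condor.OrdinalMeanAbsoluteError()])
    model.fit(x=X, y=enc_labs, sample_weight=sample_weight, epochs=1)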

    opened by jake-foxy 2
  • activation function at last layer


    Hello, I have a dataset in which the labels are like (0, 1, 2, 3), meaning the number of classes in Y is 4.

    Method 1:

    Use condor.CondorOrdinalEncoder(nclasses=4).fit_transform(labels) to transform the labels into an array of shape (n, 3), e.g. [[0,0,1], [1,0,0]], as the model's prediction targets. The last layer is tf.keras.layers.Dense(units=4-1), per the readme; however, with this design the default activation function of the last layer is None/linear (f(x) = x), so the model outputs plain logits. Should I keep the model outputting plain logits (no activation function)?

    Method 2:

    If I use tf.keras.layers.Dense(units=4-2, activation=condor.ordinal_softmax) as the last layer together with label data of shape (n, 3), would that be fine? (The condor.ordinal_softmax function increases the number of dimensions.)

    Method 3: Or should I use tf.keras.layers.Dense(units=4-1, activation=condor.ordinal_softmax) as the last layer together with label data of shape (n, 4)?

    Which method is better? Thank you!

    opened by tingjhenjiang 2
  • Update labelencoder.py


    When fitting data with nclasses=0:

    1. self.feature_names_in_ would lose its functionality (as of the previous commit).
    2. Also, using sklearn.compose.ColumnTransformer to transform multiple columns with CondorOrdinalEncoder at once would cause self.nclasses to change on every transformation, making the transformation fail; it is therefore necessary to differentiate these cases.
    opened by tingjhenjiang 1
  • Update labelencoder.py: add get_feature_names_out method


    When I try to integrate sklearn.compose.ColumnTransformer and sklearn.pipeline with the condor encoder, I find it difficult and errors occur due to the lack of support. Therefore I add support for the get_feature_names_out method, which complies with the sklearn interface.
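
    With that method available, a sketch of the intended integration (the column name and data are hypothetical, and this assumes the encoder accepts a single DataFrame column inside a ColumnTransformer):

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    import condor_tensorflow as condor

    # Hypothetical frame with one integer ordinal column (classes 0..2).
    df = pd.DataFrame({"rating": [0, 2, 1, 2]})

    ct = ColumnTransformer(
        [("ordinal", condor.CondorOrdinalEncoder(nclasses=3), ["rating"])]
    )
    encoded = ct.fit_transform(df)      # expected shape: (4, nclasses - 1) = (4, 2)
    print(ct.get_feature_names_out())   # relies on the method added in this PR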

    opened by tingjhenjiang 1
Releases (v1.0.1)