Reference code for the paper "Cross-Camera Convolutional Color Constancy" (ICCV 2021)

Overview

Cross-Camera Convolutional Color Constancy, ICCV 2021 (Oral)

Mahmoud Afifi^1,2, Jonathan T. Barron^2, Chloe LeGendre^2, Yun-Ta Tsai^2, and Francois Bleibel^2

^1York University   ^2Google Research

Paper | Poster | PPT | Video

[Figure: C5 teaser]

Reference code for the paper Cross-Camera Convolutional Color Constancy. Mahmoud Afifi, Jonathan T. Barron, Chloe LeGendre, Yun-Ta Tsai, and Francois Bleibel. In ICCV, 2021. If you use this code, please cite our paper:

@InProceedings{C5,
  title={Cross-Camera Convolutional Color Constancy},
  author={Afifi, Mahmoud and Barron, Jonathan T and LeGendre, Chloe and Tsai, Yun-Ta and Bleibel, Francois},
  booktitle={The IEEE International Conference on Computer Vision (ICCV)},
  year={2021}
}

[Figure: C5 overview]

Code

Prerequisite

  • PyTorch
  • opencv-python
  • tqdm

Training

To train C5, training/validation data should be organized as follows:

- train_folder/
       | image1_sensorname_camera1.png
       | image1_sensorname_camera1_metadata.json
       | image2_sensorname_camera1.png
       | image2_sensorname_camera1_metadata.json
       ...
       | image1_sensorname_camera2.png
       | image1_sensorname_camera2_metadata.json
       ...

In src/ops.py, the function add_camera_name(dataset_dir) can be used to rename image filenames and their corresponding ground-truth JSON files. Each JSON file should include a key named either illuminant_color_raw or gt_ill that holds the ground-truth illuminant color of the corresponding image.
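
As a minimal sketch of this convention (not the repository's exact loading code), the ground-truth illuminant of an image can be read from its metadata file as follows:

import json

def load_gt_illuminant(json_path):
    # Read the metadata JSON file that accompanies each image.
    with open(json_path, 'r') as f:
        metadata = json.load(f)
    # The ground truth may be stored under either key name.
    for key in ('illuminant_color_raw', 'gt_ill'):
        if key in metadata:
            return metadata[key]
    raise KeyError('no ground-truth illuminant in ' + json_path)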

The training code is given in train.py. The following parameters are required to set the model configuration and training data information.

  • --data-num: the number of images used for each inference (additional images + input query image), referred to in the main paper as m.
  • --input-size: number of histogram bins.
  • --learn-G: to use a G multiplier as explained in the paper.
  • --training-dir-in: training image directory.
  • --validation-dir-in: validation image directory; when this variable is None (default), the validation set will be taken from the training data based on the --validation-ratio.
  • --validation-ratio: when --validation-dir-in is None, this argument determines the validation set ratio of the image set in --training-dir-in directory.
  • --augmentation-dir: directory (or directories) of augmentation data (optional).
  • --model-name: name of the trained model.

The following parameters control training settings and hyperparameters (an example invocation follows the list):

  • --epochs: number of epochs.
  • --batch-size: batch size.
  • --load-hist: to load histograms if pre-computed (recommended).
  • --optimizer: optimization algorithm; options are Adam or SGD.
  • --learning-rate: learning rate.
  • --l2reg: L2 regularization factor.
  • --load: to load a C5 model from a .pth file; default is False.
  • --model-location: when --load is True, this variable should point to the full path of the .pth model file.
  • --validation-frequency: validation frequency (in epochs).
  • --cross-validation: to use three-fold cross-validation. When this variable is True, --validation-dir-in and --validation-ratio are ignored and three-fold cross-validation is applied to the data in the --training-dir-in directory.
  • --gpu: GPU device ID.
  • --smoothness-factor-*: smoothness loss factor of the following model components: F (conv filter), B (bias), G (multiplier layer). For example, --smoothness-factor-F can be used to set the smoothness loss for the conv filter.
  • --increasing-batch-size: to increase the batch size during training.
  • --grad-clip-value: gradient clipping value; if it's set to 0 (default), no clipping is applied.
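
For example, the following is an illustrative invocation (the flag values here are hypothetical; adjust paths and hyperparameters to your setup):

python train.py --training-dir-in ./data/train --data-num 7 --input-size 64 --learn-G True --model-name C5_m_7_h_64_w_G --epochs 60 --batch-size 16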

Testing

To test a pre-trained C5 model, testing data should be organized as follows:

- test_folder/
       | image1_sensorname_camera1.png
       | image1_sensorname_camera1_metadata.json
       | image2_sensorname_camera1.png
       | image2_sensorname_camera1_metadata.json
       ...
       | image1_sensorname_camera2.png
       | image1_sensorname_camera2_metadata.json
       ...

The testing code is given in test.py. The following parameters are required to set the model configuration and testing data information.

  • --model-name: name of the trained model.
  • --data-num: the number of images used for each inference (additional images + input query image), referred to in the main paper as m.
  • --input-size: number of histogram bins.
  • --g-multiplier: to use a G multiplier as explained in the paper.
  • --testing-dir-in: testing image directory.
  • --batch-size: batch size.
  • --load-hist: to load histograms if pre-computed (recommended).
  • --multiple_test: to run multiple tests (ten, as mentioned in the paper) and save their results.
  • --white-balance: to save white-balanced testing images.
  • --cross-validation: to use three-fold cross-validation. When it is set to True, three pre-trained models are expected, each saved with the fold number as a postfix. The testing image filenames should be listed in .npy files located in the folds directory under the same name as the dataset, which should match the folder name in --testing-dir-in.
  • --gpu: GPU device ID.

In the images directory, there are a few examples captured by a Mobile Sony IMX135 sensor from the INTEL-TAU dataset. To white-balance these raw images using a C5 model (trained on DSLR cameras from the NUS and Gehler-Shi datasets), as shown in the figure below, use the following command:

python test.py --testing-dir-in ./images --white-balance True --model-name C5_m_7_h_64

[Figure: white-balanced examples]

To test with the gain multiplier, use the following command:

python test.py --testing-dir-in ./images --white-balance True --g-multiplier True --model-name C5_m_7_h_64_w_G

Note that in testing, C5 does not require any metadata. The testing code uses the JSON files only to load the ground-truth illuminant colors for comparison with the estimated values.
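
For reference, the standard comparison metric in color constancy is the angular error between the estimated and ground-truth illuminant vectors. A minimal sketch of this metric (not necessarily the repository's exact evaluation code):

import numpy as np

def angular_error(estimated, ground_truth):
    # Angle, in degrees, between the two illuminant RGB vectors.
    est = np.asarray(estimated, dtype=np.float64)
    gt = np.asarray(ground_truth, dtype=np.float64)
    cos_sim = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    # Clip to avoid NaNs from floating-point round-off.
    return np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))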

Data augmentation

The raw-to-raw augmentation functions are provided in src/aug_ops.py. Call the set_sampling_params function to set the sampling parameters (e.g., excluding certain cameras/datasets from the source set, determining the number of augmented images, etc.). Then, call the map_raw_images function to generate a new augmentation set with the chosen parameters. The function map_raw_images takes four arguments (a usage sketch follows the list):

  • xyz_img_dir: directory of XYZ images; you can download the CIE XYZ images from here. All images were transformed to the CIE XYZ space after applying the black-level normalization and masking out the calibration object (i.e., the color rendition chart or SpyderCUBE).
  • target_cameras: a list of one or more of the following camera models: Canon EOS 550D, Canon EOS 5D, Canon EOS-1DS, Canon EOS-1Ds Mark III, Fujifilm X-M1, Nikon D40, Nikon D5200, Olympus E-PL6, Panasonic DMC-GX1, Samsung NX2000, Sony SLT-A57, or All.
  • output_dir: output directory to save the augmented images and their metadata files.
  • params: sampling parameters set by the set_sampling_params function.
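
A minimal usage sketch is shown below; the keyword arguments passed to set_sampling_params are illustrative assumptions, so check src/aug_ops.py for the exact signature:

from src.aug_ops import set_sampling_params, map_raw_images

# Hypothetical sampling parameters: number of augmented images to generate
# and camera models to exclude from the source set.
params = set_sampling_params(images_number=5000,
                             excluded_camera_models=['Canon EOS 5D'])

map_raw_images(xyz_img_dir='./xyz_images',
               target_cameras=['Sony SLT-A57', 'Nikon D5200'],
               output_dir='./augmented_data',
               params=params)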