Official implementation for "Image Quality Assessment using Contrastive Learning"

Overview

Image Quality Assessment using Contrastive Learning

Pavan C. Madhusudana, Neil Birkbeck, Yilin Wang, Balu Adsumilli and Alan C. Bovik

This is the official repository of the paper Image Quality Assessment using Contrastive Learning

Usage

The code has been tested on Linux systems with Python 3.6. Please refer to requirements.txt for the required packages.

Running CONTRIQUE

To obtain quality scores with the CONTRIQUE model, the trained checkpoint first needs to be downloaded. The following command downloads the checkpoint:

wget -L https://utexas.box.com/shared/static/rhpa8nkcfzpvdguo97n2d5dbn4qb03z8.tar -O models/CONTRIQUE_checkpoint25.tar -q --show-progress

Alternatively, the checkpoint can be downloaded manually using this link.
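
Once downloaded, it can be useful to confirm that the checkpoint file loads cleanly before moving on. The snippet below is a minimal sanity check, assuming the file was saved with torch.save; the exact keys stored inside (for example a model state dict) are not assumed.

import torch

# Load the downloaded checkpoint on the CPU; map_location avoids needing a GPU here.
ckpt = torch.load('models/CONTRIQUE_checkpoint25.tar', map_location='cpu')

# Print the top-level structure. Whether the file is a raw state_dict or a
# dictionary with named entries is left open here.
if isinstance(ckpt, dict):
    print('checkpoint keys:', list(ckpt.keys())[:10])
else:
    print('checkpoint type:', type(ckpt))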

Obtaining Quality Scores

We provide trained regressor models in the models directory, which can be used to predict image quality from features extracted by the CONTRIQUE model. For demonstration purposes, some sample images are provided in the sample_images folder.

For blind quality prediction, the following commands can be used.

python3 demo_score.py --im_path sample_images/60.bmp --model_path models/CONTRIQUE_checkpoint25.tar --linear_regressor_path models/CLIVE.save
python3 demo_score.py --im_path sample_images/img66.bmp --model_path models/CONTRIQUE_checkpoint25.tar --linear_regressor_path models/LIVE.save
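
The demo scripts can also be bypassed if CONTRIQUE features are already available, for example saved to a .npy file as in the train_regressor.py step below. The sketch here assumes the provided .save files are pickled scikit-learn regressors; demo_score.py remains the authoritative reference for how features are extracted and scored.

import pickle
import numpy as np

# Assumption: the .save files are pickled scikit-learn linear regressors and
# feat.npy holds an (N, D) array with one row of CONTRIQUE features per image.
with open('models/CLIVE.save', 'rb') as f:
    regressor = pickle.load(f)

features = np.load('feat.npy')
scores = regressor.predict(features)   # one predicted quality score per image
print(scores)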

For full-reference quality assessment, the following command can be used.

python3 demos_score_FR.py --ref_path sample_images/churchandcapitol.bmp --dist_path sample_images/img66.bmp --model_path models/CONTRIQUE_checkpoint25.tar --linear_regressor_path models/CSIQ_FR.save

Training CONTRIQUE

Download Training Data

Create a directory to store the images used for training CONTRIQUE (mkdir training_data); a helper that creates the full layout is sketched after the list below.

  1. KADIS-700k : Download the KADIS-700k dataset and execute the supplied code to generate synthetically distorted images. Store this data in the training_data/kadis700k directory.
  2. AVA : Download the AVA dataset and store it in the training_data/UGC_images/AVA_Dataset directory.
  3. COCO : The COCO dataset contains 330k images spread across multiple competitions. We used four folders for training: training_data/UGC_images/test2015, training_data/UGC_images/train2017, training_data/UGC_images/val2017, and training_data/UGC_images/unlabeled2017.
  4. CERTH-Blur : Store the blur dataset images in the training_data/UGC_images/blur_image directory.
  5. VOC : Store the VOC images in the training_data/UGC_images/VOC2012 directory.
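
A small helper, referenced above, that creates the full directory layout in one step (the paths simply mirror the list; the helper itself is not part of the repository):

import os

# Expected CONTRIQUE training-data layout, mirroring the list above.
dirs = [
    'training_data/kadis700k',
    'training_data/UGC_images/AVA_Dataset',
    'training_data/UGC_images/test2015',
    'training_data/UGC_images/train2017',
    'training_data/UGC_images/val2017',
    'training_data/UGC_images/unlabeled2017',
    'training_data/UGC_images/blur_image',
    'training_data/UGC_images/VOC2012',
]
for d in dirs:
    os.makedirs(d, exist_ok=True)   # no-op if the directory already exists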

Training Model

Download the csv files containing paths to the training images and their corresponding distortion classes.

wget -L https://utexas.box.com/shared/static/124n9sfb27chgt59o8mpxl7tomgvn2lo.csv -O csv_files/file_names_ugc.csv -q --show-progress
wget -L https://utexas.box.com/shared/static/jh5cmu63347auyza37773as5o9zxctby.csv -O csv_files/file_names_syn.csv -q --show-progress

The above files can also be downloaded manually using these links: link1, link2.
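
A quick check that the csv files downloaded correctly, without assuming their exact column names, can be done with pandas:

import pandas as pd

# Report the shape and the first few column names of each downloaded csv file.
for name in ['csv_files/file_names_ugc.csv', 'csv_files/file_names_syn.csv']:
    df = pd.read_csv(name)
    print(name, df.shape, list(df.columns)[:5])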

For training with a single GPU, the following command can be used:

python3 train.py --batch_size 256 --lr 0.6 --epochs 25

Training with multiple GPUs using Distributed training (Recommended)

Run the following commands concurrently in different terminals:

CUDA_VISIBLE_DEVICES=0 python3 train.py --nodes 4 --nr 0 --batch_size 64 --lr 0.6 --epochs 25
CUDA_VISIBLE_DEVICES=1 python3 train.py --nodes 4 --nr 1 --batch_size 64 --lr 0.6 --epochs 25
CUDA_VISIBLE_DEVICES=2 python3 train.py --nodes 4 --nr 2 --batch_size 64 --lr 0.6 --epochs 25
CUDA_VISIBLE_DEVICES=3 python3 train.py --nodes 4 --nr 3 --batch_size 64 --lr 0.6 --epochs 25

Note that in distributed training, the batch_size value is the number of images loaded on each GPU from each distortion type. During CONTRIQUE training, an equal number of images is loaded from the synthetic and authentic distortion sets, so in the above example 128 images (64 synthetic + 64 authentic) will be loaded on each GPU.
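
For reference, the --nodes and --nr flags follow the common PyTorch distributed pattern in which each process derives a global rank before joining the process group. The sketch below illustrates that pattern only; it is not the repository's train.py, and the rendezvous address, port, and gpus-per-node value are assumptions.

import os
import torch.distributed as dist

def init_distributed(nodes, nr, gpus_per_node=1, local_rank=0):
    # Generic initialization sketch; train.py may differ in detail.
    world_size = nodes * gpus_per_node           # total number of processes
    rank = nr * gpus_per_node + local_rank       # global rank of this process
    os.environ.setdefault('MASTER_ADDR', '127.0.0.1')   # assumed rendezvous address
    os.environ.setdefault('MASTER_PORT', '29500')        # assumed rendezvous port
    dist.init_process_group(backend='nccl', rank=rank, world_size=world_size)
    return rank, world_size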

Training Linear Regressor

After CONTRIQUE model training is complete, a linear regressor is trained on CONTRIQUE features and the corresponding ground-truth quality scores with the following command.

python3 train_regressor.py --feat_path feat.npy --ground_truth_path scores.npy --alpha 0.1
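
The --alpha flag suggests a regularized linear model. A minimal sketch of the idea, assuming a scikit-learn Ridge regressor (train_regressor.py may use a different regularized regressor or preprocessing), is shown below.

import pickle
import numpy as np
from sklearn.linear_model import Ridge

# Assumed inputs: feat.npy holds (N, D) CONTRIQUE features and scores.npy holds
# the N ground-truth quality scores for the same images.
X = np.load('feat.npy')
y = np.load('scores.npy')

regressor = Ridge(alpha=0.1)    # alpha mirrors the --alpha flag above
regressor.fit(X, y)

# Saved in the same spirit as the provided .save files (the format is an assumption).
with open('models/my_regressor.save', 'wb') as f:
    pickle.dump(regressor, f)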

Contact

Please contact Pavan ([email protected]) if you have any questions, suggestions or corrections to the above implementation.

Citation

@article{madhusudana2021st,
  title={Image Quality Assessment using Contrastive Learning},
  author={Madhusudana, Pavan C and Birkbeck, Neil and Wang, Yilin and Adsumilli, Balu and Bovik, Alan C},
  journal={arXiv:2110.13266},
  year={2021}
}