PyTorch implementation of MSBG hearing loss model and MBSTOI intelligibility metric

Overview


This repository contains a PyTorch implementation of the MSBG hearing loss model and the MBSTOI intelligibility metric. The models are differentiable and can be used as loss functions for training neural networks. Both follow the Python implementations of MSBG and MBSTOI provided by the organizers of the Clarity Enhancement Challenge; please see the Clarity challenge repository for more information about the original models.

Please note that the differentiable models are approximations of the original models. They are intended for training neural networks, not for reproducing the outputs of the original models exactly.

Requirements and installation

The models reuse parts of the functionality of the original MSBG and MBSTOI implementations. First, download the Clarity challenge repository and set its location as CLARITY_ROOT. Then install the necessary requirements:

pip install -r requirements.txt
pushd .
cd $CLARITY_ROOT/projects/MSBG/packages/matlab_mldivide
python setup.py install
popd

Additionally, set the paths to the Clarity repository and to this repository in path.sh, and source path.sh before using the provided modules.

. path.sh

Tests and example script

The tests directory contains scripts that check the correspondence of the differentiable modules with their original implementations. Running the tests requires the Clarity data, which can be obtained from the Clarity challenge repository; please set the paths to the data in the scripts.

MSBG test

The hearing loss tests compare the outputs of the functions in the original implementation with those of the differentiable version. The output reports the mean difference between the output signals (a minimal sketch of such a comparison is shown after the list):

Test measure_rms, mean difference 9.629646580133766e-09
Test src_to_cochlea_filt forward, mean difference 9.830486283616455e-16
Test src_to_cochlea_filt backward, mean difference 6.900756131702976e-15
Test smear, mean difference 0.00019685214410863303
Test gammatone_filterbank, mean difference 5.49958965492409e-07
Test compute_envelope, mean difference 4.379759604381869e-06
Test recruitment, mean difference 3.1055169855373764e-12
Test cochlea, mean difference 2.5698933453410134e-06
Test hearing_loss, mean difference 2.2326804706160673e-06
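
For orientation, the sketch below illustrates what these numbers mean, using the RMS measurement as an example. It is not the repository's actual test code; rms_np and rms_torch are illustrative stand-ins for an original (NumPy) processing stage and its differentiable (PyTorch) counterpart.

import numpy as np
import torch

def rms_np(x):
    # Reference RMS computed in NumPy, standing in for the original implementation.
    return float(np.sqrt(np.mean(x ** 2)))

def rms_torch(x):
    # RMS computed in PyTorch, standing in for the differentiable module.
    return torch.sqrt(torch.mean(x ** 2))

signal = np.random.randn(16000).astype(np.float32)
reference = rms_np(signal)
tested = rms_torch(torch.from_numpy(signal)).item()
print("Test measure_rms, mean difference", abs(reference - tested))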

MBSTOI test

The test of the intelligibility metric compares the MBSTOI values obtained with the original and the differentiable models on the development set of the Clarity challenge. The following graph shows the comparison.

[Figure: Correspondence of MBSTOI metrics]
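
Numerically, the agreement over the development set can be summarized as sketched below. The score arrays are placeholders filled with synthetic data; in practice they would hold one MBSTOI value per development-set sample from each implementation.

import numpy as np

# Placeholder arrays of MBSTOI scores, one value per development-set sample
# (synthetic data used here purely for illustration).
original_scores = np.random.uniform(0.0, 1.0, size=100)
differentiable_scores = original_scores + np.random.normal(0.0, 0.01, size=100)

print("mean absolute difference:", np.mean(np.abs(original_scores - differentiable_scores)))
print("Pearson correlation:", np.corrcoef(original_scores, differentiable_scores)[0, 1])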

Example script

The script example.py shows how to use the provided modules as loss functions for training a neural network. In the script, we use a small model and overfit it on a single example. The decreasing loss confirms that the provided modules are differentiable.

[Figure: Loss function with MSBG and MBSTOI loss]
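
The training pattern in example.py is roughly the one sketched below. The commented-out line indicates where the differentiable hearing loss model and MBSTOI loss would enter the computation graph; the class names there are placeholders, not the exact API of this repository, and the runnable part falls back to a plain L1 loss so the sketch stays self-contained.

import torch

# Placeholder names, standing in for the differentiable modules provided by this
# repository (the actual class names and arguments may differ):
# hearing_model = MSBGHearingModel(audiogram=...)   # differentiable MSBG hearing loss model
# mbstoi_loss = MBSTOILoss()                        # differentiable MBSTOI-based loss

# A tiny enhancement network to overfit on a single example, as in example.py.
net = torch.nn.Sequential(
    torch.nn.Conv1d(1, 16, kernel_size=9, padding=4),
    torch.nn.ReLU(),
    torch.nn.Conv1d(16, 1, kernel_size=9, padding=4),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

noisy = torch.randn(1, 1, 16000)   # stand-in for a noisy input signal
clean = torch.randn(1, 1, 16000)   # stand-in for the clean reference

for step in range(100):
    optimizer.zero_grad()
    enhanced = net(noisy)
    # In the real script the enhanced and reference signals would be passed through the
    # differentiable hearing loss model and MBSTOI; a plain L1 loss keeps this sketch runnable.
    loss = torch.nn.functional.l1_loss(enhanced, clean)
    # loss = mbstoi_loss(hearing_model(enhanced), clean)   # conceptual usage of the provided modules
    loss.backward()
    optimizer.step()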

Citation

If you use this work, please cite:

@inproceedings{Zmolikova2021BUT,
  author    = {Zmolikova, Katerina and \v{C}ernock\'{y}, Jan "Honza"},
  title     = {{BUT system for the first Clarity enhancement challenge}},
  year      = {2021},
  booktitle = {The Clarity Workshop on Machine Learning Challenges for Hearing Aids (Clarity-2021)},
}