Consumer Fairness in Recommender Systems: Contextualizing Definitions and Mitigations

Overview


This is the repository for the paper Consumer Fairness in Recommender Systems: Contextualizing Definitions and Mitigations, developed by Giacomo Medda, PhD student at the University of Cagliari, with the support of Gianni Fenu, Full Professor at the University of Cagliari, Mirko Marras, Non-tenure-track Assistant Professor at the University of Cagliari, and Ludovico Boratto, Tenure-track Assistant Professor at the University of Cagliari.

The goal of the paper is to reach a common understanding and practical benchmarks of how and when each consumer-fairness mitigation procedure in recommender systems can be used in comparison with the others.

Repository Organization

  • reproducibility_study

    This directory contains the source code of each reproduced paper, identified by the author names of the respective paper.

    • Ashokan and Haas: Fairness metrics and bias mitigation strategies for rating predictions
    • Burke et al: Balanced Neighborhoods for Multi-sided Fairness in Recommendation
    • Ekstrand et al: All The Cool Kids, How Do They Fit In. Popularity and Demographic Biases in Recommender Evaluation and Effectiveness
    • Frisch et al: Co-clustering for fair recommendation
    • Kamishima et al: Recommendation Independence
    • Li et al: User-oriented Fairness in Recommendation
    • Rastegarpanah et al: Fighting Fire with Fire. Using Antidote Data to Improve Polarization and Fairness of Recommender Systems
    • Wu et al: Learning Fair Representations for Recommendation. A Graph-based Perspective
  • Preprocessing

    Contains the scripts to preprocess the raw datasets and to generate the input data for each reproduced paper.

  • Evaluation

    Contains the scripts to load the predictions of each reproduced paper, compute the metrics, and generate plots and tables in LaTeX and Markdown form.

  • Other Folders

    The other folders not mentioned above are part of the codebase that supports the scripts contained in Preprocessing and Evaluation. These directories and their contents are described in README_codebase, since the structure and code inside them are only used to support the reproducibility study and are independent of the specific implementation of each paper.

Reproducibility Pipeline

  • Code Integration

    The preprocessing of the raw datasets is performed by the scripts in the Preprocessing folder.

    The commands to preprocess each dataset are listed at the top of the related dataset script, and the procedure is described in more detail in REPRODUCE.md. The preprocessed datasets are saved in data/preprocessed_datasets.

    Once the MovieLens 1M and Last.FM 1K datasets have been preprocessed, we can move on to generating the input data for each reproduced paper.

    The commands to generate the input data for each preprocessed dataset and sensitive attribute are listed at the top of the script, and the procedure is described in more detail in REPRODUCE.md. The generated input data are saved in Preprocessing/input_data. A rough command-line sketch of both steps is shown below.
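
    Below is a hedged sketch of what these two steps could look like from the command line. The script names and options are placeholders, not taken from the repository; the authoritative commands are the ones listed at the top of each script and in REPRODUCE.md.

    # hypothetical dataset preprocessing; output goes to data/preprocessed_datasets
    python3 Preprocessing/preprocess_movielens_1m.py

    # hypothetical input-data generation for one dataset and sensitive attribute; output goes to Preprocessing/input_data
    python3 Preprocessing/generate_input_data.py --dataset movielens_1m --sensitive_attribute gender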

  • Mitigation Execution

    Each paper (folder) listed in the reproducibility_study subsection of Repository Organization contains a REPRODUCE.md file that describes everything needed to set up, prepare, and run the reproduced paper. In particular, it provides instructions to install the dependencies and indicates the specific subfolders to fill with the input data generated in the previous step, in order to properly run the experiments of the selected paper. The procedure for each source code is described in more detail in the corresponding REPRODUCE.md file; a hedged sketch of the workflow is shown below.
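
    A minimal sketch of this per-paper workflow, with placeholder folder names and paths (the real ones are stated in each REPRODUCE.md), could be:

    # enter one of the paper folders listed above (placeholder name)
    cd reproducibility_study/<paper_folder>
    # create an isolated environment (use the Python version required by the paper; venv shown here for Python 3)
    python3 -m venv .venv && source .venv/bin/activate
    # install the paper-specific dependencies
    pip install -r requirements.txt
    # copy the input data generated by Preprocessing into the subfolder indicated in REPRODUCE.md (placeholder paths)
    cp ../../Preprocessing/input_data/<dataset>/<attribute>/* <input_subfolder>/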

  • Relevance Estimation and Metrics Computation

    The REPRODUCE.md file contained in each paper folder also describes where the predictions can be found at the end of the mitigation procedure and guides the developer through the instructions in the REPRODUCE.md of Evaluation, which covers:

    • metrics_reproduced: the script that loads all the predictions of relevance scores and computes the metrics in the form of plots and LaTeX tables. This is the script that requires the most configuration, since the paths to the specific predictions of each paper and model may need to be copied and pasted into the script if the filenames do not correspond to what is expected. The already mentioned REPRODUCE.md describes these steps in more detail and specifies the commands to execute to obtain the desired results; a hedged example is sketched below.
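
    For orientation, running the evaluation might look like the sketch below; the actual filename, location, and invocation are the ones given in the REPRODUCE.md of Evaluation, and the prediction paths are configured inside the script itself.

    # hypothetical invocation of the evaluation script (assumes the prediction paths have been set inside it)
    python3 Evaluation/metrics_reproduced.py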

Installation

Given the codebase and the different library versions used by each paper, multiple Python versions are required to run this code properly.

The codebase (that is, the code outside reproducibility_study, Preprocessing, and Evaluation) needs a Python 3.8 installation, and all the necessary dependencies can be installed with the requirements.txt file in the root of the repository, using the following command on Windows:

pip install -r requirements.txt

or on Linux:

pip3 install -r requirements.txt

The installation of each reproduced paper is thoroughly described in the REPRODUCE.md file inside each paper folder, and every folder contains a requirements.txt file that can be used to install the dependencies in the same way. We recommend using a virtual environment at least for each reproduced paper, since some require specific Python versions (2, 3, 3.7), and one virtual environment per paper keeps the code organization tidy. Virtual environments can be created in different ways depending on the Python version and the system: the Python Documentation describes the creation of virtual environments for Python >= 3.5, while the virtualenv Website can be used for Python 2. A sketch of both options is shown below.
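
As a reference, here is a minimal sketch of both options; the environment names are placeholders.

# Python >= 3.5: built-in venv module
python3 -m venv venv_codebase
source venv_codebase/bin/activate      # on Windows: venv_codebase\Scripts\activate
pip install -r requirements.txt

# Python 2: virtualenv package
pip install virtualenv
virtualenv -p python2 venv_paper
source venv_paper/bin/activate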

Results

[Results figures: Top-N Recommendation (Gender), Top-N Recommendation (Age), Rating Prediction (Gender), Rating Prediction (Age)]
