QRec: A Python Framework for quick implementation of recommender systems (TensorFlow Based)

Introduction

QRec is a Python framework for recommender systems (supporting Python 3.7.4 and TensorFlow 1.14+) in which a number of influential and recent state-of-the-art recommendation models are implemented. QRec has a lightweight architecture and provides user-friendly interfaces, which facilitates model implementation and evaluation.
Founder and principal contributor: @Coder-Yu
Other contributors: @DouTong @Niki666 @HuXiLiFeng @BigPowerZ @flyxu
Supported by: @AIhongzhi (A/Prof. Hongzhi Yin, UQ), @mingaoo (A/Prof. Min Gao, CQU)

What's New

12/10/2021 - BUIR, proposed in a SIGIR'21 paper, has been added.
30/07/2021 - We have migrated QRec from Python 2 to Python 3.
07/06/2021 - SEPT, proposed in our KDD'21 paper, has been added.
16/05/2021 - SGL, proposed in a SIGIR'21 paper, has been added.
16/01/2021 - MHCN, proposed in our WWW'21 paper, has been added.
22/09/2020 - DiffNet, proposed in SIGIR'19, has been added.
19/09/2020 - DHCF, proposed in KDD'20, has been added.
29/07/2020 - ESRF, proposed in my TKDE paper, has been added.
23/07/2020 - LightGCN, proposed in SIGIR'20, has been added.
17/09/2019 - NGCF, proposed in SIGIR'19, has been added.
13/08/2019 - RSGAN, proposed in ICDM'19, has been added.
09/08/2019 - Our paper has been accepted as a full research paper by ICDM'19.
20/02/2019 - IRGAN, proposed in SIGIR'17, has been added.
12/02/2019 - CFGAN, proposed in CIKM'18, has been added.

Architecture

QRec Architecture

Workflow

QRec Workflow

Features

  • Cross-platform: QRec can be easily deployed and executed on any platform, including MS Windows, Linux and macOS.
  • Fast execution: QRec is built on NumPy, TensorFlow and other lightweight structures, which makes it run fast.
  • Easy configuration: QRec configures recommenders with a configuration file and provides multiple evaluation protocols.
  • Easy expansion: QRec provides a set of well-designed recommendation interfaces by which new algorithms can be easily implemented.

Requirements

  • gensim==4.1.2
  • joblib==1.1.0
  • mkl==2022.0.0
  • mkl_service==2.4.0
  • networkx==2.6.2
  • numba==0.53.1
  • numpy==1.20.3
  • scipy==1.6.2
  • tensorflow==1.14.0

Usage

There are two ways to run the recommendation models in QRec:

  • 1. Configure the xx.conf file in the directory named config (xx is the name of the model you want to run).
  • 2. Run main.py.

Or

  • Follow the code in snippet.py (see the sketch below).

For more details, we refer you to the handbook of QRec.
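
For programmatic use in the spirit of snippet.py, the flow might look like the minimal sketch below. The module and class names used here (util.config.Config and QRec) are assumptions made for illustration; the authoritative entry points are those used in snippet.py and described in the handbook.

    # Minimal sketch of launching a model from Python (module/class names are assumed).
    from util.config import Config   # assumed: loader that parses an xx.conf file
    from QRec import QRec            # assumed: top-level runner class

    if __name__ == '__main__':
        conf = Config('./config/LightGCN.conf')  # any model configuration under ./config
        recommender = QRec(conf)
        recommender.execute()                    # build, train and evaluate the model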

Configuration

Essential Options

ratings (example: D:/MovieLens/100K.txt)
  Set the file path of the rating dataset. Format: the fields in each row are separated by a space, tab or comma.
social (example: D:/MovieLens/trusts.txt)
  Set the file path of the social dataset. Format: the fields in each row are separated by a space, tab or comma.
ratings.setup (example: -columns 0 1 2)
  -columns: the (user, item, rating) columns of the rating data to use.
social.setup (example: -columns 0 1 2)
  -columns: the (trustor, trustee, weight) columns of the social data to use.
mode.name (example: UserKNN/ItemKNN/SlopeOne/etc.)
  Name of the recommendation model.
evaluation.setup (example: -testSet ../dataset/testset.txt)
  Main options: -testSet, -ap, -cv (choose one of them).
  -testSet path/to/test/file: the test set is specified manually.
  -ap ratio: the ratings are automatically partitioned into a training set and a test set; the number is the proportion of the test set (e.g. -ap 0.2).
  -cv k: cross validation; k is the number of folds (e.g. -cv 5).
  -predict path/to/user/list/file: predict for a given list of users without evaluation; the user list file must be specified manually (each line contains one user).
  Secondary options: -b, -p, -cold, -tf, -val (multiple choices).
  -val ratio: the model is tested on a validation set generated by randomly sampling the training data with the given ratio.
  -b thres: binarize the rating values; ratings equal to or greater than thres are set to 1, and ratings lower than thres are left out (e.g. -b 3.0).
  -p: cross validation is executed in parallel; otherwise the folds are executed one by one.
  -tf: model training is conducted on TensorFlow (only applicable and needed for shallow models).
  -cold thres: evaluation on cold-start users; users in the training set with more than thres rated items are removed from the test set.
item.ranking (example: off -topN -1)
  Main option: whether to do item ranking.
  -topN N1,N2,N3...: the lengths of the recommendation lists; QRec can generate evaluation results for multiple values of N at the same time.
output.setup (example: on -dir ./Results/)
  Main option: whether to output recommendation results.
  -dir path: the directory path of the output results.
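
Putting the entries above together, a hypothetical xx.conf could look like the sketch below. The key=value layout, the dataset paths and the chosen model are illustrative assumptions; the files shipped in the config directory show the exact syntax.

    ratings=../dataset/example/ratings.txt
    social=../dataset/example/trusts.txt
    ratings.setup=-columns 0 1 2
    social.setup=-columns 0 1 2
    mode.name=UserKNN
    evaluation.setup=-cv 5 -b 3.0
    item.ranking=off -topN -1
    output.setup=on -dir ./Results/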

Memory-based Options

similarity (example: pcc/cos)
  Set the similarity method to use. Options: PCC, COS.
num.neighbors (example: 30)
  Set the number of neighbors used for KNN-based algorithms such as UserKNN and ItemKNN.

Model-based Options

num.factors (example: 5/10/20/number)
  Set the number of latent factors.
num.max.epoch (example: 100/200/number)
  Set the maximum number of epochs for iterative recommendation algorithms.
learnRate (example: -init 0.01 -max 1)
  -init: initial learning rate for iterative recommendation algorithms.
  -max: maximum learning rate (default 1).
reg.lambda (example: -u 0.05 -i 0.05 -b 0.1 -s 0.1)
  -u: user regularization; -i: item regularization; -b: bias regularization; -s: social regularization.
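
These entries go into the same xx.conf file as the essential options. The sketch below is illustrative only (the key=value layout is assumed, as above); the first two lines matter for memory-based models such as UserKNN/ItemKNN, and the remaining lines for model-based ones.

    similarity=cos
    num.neighbors=30
    num.factors=10
    num.max.epoch=100
    learnRate=-init 0.01 -max 1
    reg.lambda=-u 0.05 -i 0.05 -b 0.1 -s 0.1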

Implement Your Model

  • 1. Make your new algorithm extend the proper base class.
  • 2. Reimplement some of the following functions as needed (a minimal sketch is given below the list).
          - readConfiguration()
          - printAlgorConfig()
          - initModel()
          - trainModel()
          - saveModel()
          - loadModel()
          - predictForRanking()
          - predict()

For more details, we refer you to the handbook of QRec.
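
As a rough illustration, a new model could be sketched as follows. Only the overridable method names come from the list above; the base class name, its import path and the commented behavior are assumptions, so check the existing models and the handbook for the real interfaces.

    # Hypothetical sketch of a new QRec model; the base class and its import path are assumed.
    from base.iterativeRecommender import IterativeRecommender  # assumed base class

    class MyModel(IterativeRecommender):
        def initModel(self):
            super(MyModel, self).initModel()
            # build the model parameters here, e.g. user/item latent factor matrices

        def trainModel(self):
            # run the optimization loop for at most num.max.epoch epochs
            pass

        def predict(self, user):
            # return predicted ratings of the given user (rating prediction task)
            pass

        def predictForRanking(self, user):
            # return scores over all items, used to produce the top-N list (ranking task)
            pass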

Implemented Algorithms

       
Rating prediction Paper
SlopeOne Lemire and Maclachlan, Slope One Predictors for Online Rating-Based Collaborative Filtering, SDM'05.
PMF Salakhutdinov and Mnih, Probabilistic Matrix Factorization, NIPS'08.
SoRec Ma et al., SoRec: Social Recommendation Using Probabilistic Matrix Factorization, SIGIR'08.
SVD++ Koren, Factorization meets the neighborhood: a multifaceted collaborative filtering model, SIGKDD'08.
RSTE Ma et al., Learning to Recommend with Social Trust Ensemble, SIGIR'09.
SVD Y. Koren, Collaborative Filtering with Temporal Dynamics, SIGKDD'09.
SocialMF Jamali and Ester, A Matrix Factorization Technique with Trust Propagation for Recommendation in Social Networks, RecSys'10.
EE Khoshneshin et al., Collaborative Filtering via Euclidean Embedding, RecSys'10.
SoReg Ma et al., Recommender systems with social regularization, WSDM'11.
LOCABAL Tang et al., Exploiting Local and Global Social Context for Recommendation, AAAI'13.
SREE Li et al., Social Recommendation Using Euclidean embedding, IJCNN'17.
CUNE-MF Zhang et al., Collaborative User Network Embedding for Social Recommender Systems, SDM'17.

                       
Item Ranking Paper
BPR Rendle et al., BPR: Bayesian Personalized Ranking from Implicit Feedback, UAI'09.
WRMF Hu et al., Collaborative Filtering for Implicit Feedback Datasets, ICDM'08.
SBPR Zhao et al., Leveraging Social Connections to Improve Personalized Ranking for Collaborative Filtering, CIKM'14.
ExpoMF Liang et al., Modeling User Exposure in Recommendation, WWW'16.
CoFactor Liang et al., Factorization Meets the Item Embedding: Regularizing Matrix Factorization with Item Co-occurrence, RecSys'16.
TBPR Wang et al., Social Recommendation with Strong and Weak Ties, CIKM'16.
CDAE Wu et al., Collaborative Denoising Auto-Encoders for Top-N Recommender Systems, WSDM'16.
DMF Xue et al., Deep Matrix Factorization Models for Recommender Systems, IJCAI'17.
NeuMF He et al., Neural Collaborative Filtering, WWW'17.
CUNE-BPR Zhang et al., Collaborative User Network Embedding for Social Recommender Systems, SDM'17.
IRGAN Wang et al., IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models, SIGIR'17.
SERec Wang et al., Collaborative Filtering with Social Exposure: A Modular Approach to Social Recommendation, AAAI'18.
APR He et al., Adversarial Personalized Ranking for Recommendation, SIGIR'18.
IF-BPR Yu et al., Adaptive Implicit Friends Identification over Heterogeneous Network for Social Recommendation, CIKM'18.
CFGAN Chae et al., CFGAN: A Generic Collaborative Filtering Framework based on Generative Adversarial Networks, CIKM'18.
NGCF Wang et al., Neural Graph Collaborative Filtering, SIGIR'19.
DiffNet Wu et al., A Neural Influence Diffusion Model for Social Recommendation, SIGIR'19.
RSGAN Yu et al., Generating Reliable Friends via Adversarial Learning to Improve Social Recommendation, ICDM'19.
LightGCN He et al., LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation, SIGIR'20.
DHCF Ji et al., Dual Channel Hypergraph Collaborative Filtering, KDD'20.
ESRF Yu et al., Enhancing Social Recommendation with Adversarial Graph Convolutional Networks, TKDE'20.
MHCN Yu et al., Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation, WWW'21.
SGL Wu et al., Self-supervised Graph Learning for Recommendation, SIGIR'21.
SEPT Yu et al., Socially-Aware Self-supervised Tri-Training for Recommendation, KDD'21.
BUIR Lee et al., Bootstrapping User and Item Representations for One-Class Collaborative Filtering, SIGIR'21.

Related Datasets

   
Data Set          Users    Items     Ratings (Scale)        Density    Social Users   Social Links (Type)
Ciao [1]          7,375    105,114   284,086 [1, 5]         0.0365%    7,375          111,781 (Trust)
Epinions [2]      40,163   139,738   664,824 [1, 5]         0.0118%    49,289         487,183 (Trust)
Douban [3]        2,848    39,586    894,887 [1, 5]         0.794%     2,848          35,770 (Trust)
LastFM [4]        1,892    17,632    92,834 (implicit)      0.27%      1,892          25,434 (Trust)
Yelp [5]          19,539   21,266    450,884 (implicit)     0.11%      19,539         864,157 (Trust)
Amazon-Book [6]   52,463   91,599    2,984,108 (implicit)   0.11%      -              -

Reference

[1]. Tang, J., Gao, H., Liu, H.: mTrust: Discerning multi-faceted trust in a connected world. In: Proceedings of the ACM International Conference on Web Search and Data Mining (WSDM 2012), Seattle, WA, USA, pp. 93–102 (2012).

[2]. Massa, P., Avesani, P.: Trust-aware recommender systems. In: Proceedings of the 2007 ACM Conference on Recommender Systems (RecSys 2007), pp. 17–24. ACM (2007).

[3]. Zhao, G., Qian, X., Xie, X.: User-service rating prediction by exploring social users' rating behaviors. IEEE Transactions on Multimedia 18(3), 496–506 (2016).

[4]. Cantador, I., Brusilovsky, P., Kuflik, T.: 2nd Workshop on Information Heterogeneity and Fusion in Recommender Systems (HetRec 2011). In: Proceedings of the 5th ACM Conference on Recommender Systems (RecSys 2011). ACM, New York, NY, USA (2011).

[5]. Yu et al.: Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation, WWW'21.

[6]. He et al.: LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation, SIGIR'20.

Acknowledgment

This project is supported by the Responsible Big Data Intelligence Lab (RBDI) at the School of ITEE, University of Queensland, and by Chongqing University.

If our project is helpful to you, please cite one of these papers.
@inproceedings{yu2018adaptive,
title={Adaptive implicit friends identification over heterogeneous network for social recommendation},
author={Yu, Junliang and Gao, Min and Li, Jundong and Yin, Hongzhi and Liu, Huan},
booktitle={Proceedings of the 27th ACM International Conference on Information and Knowledge Management},
pages={357--366},
year={2018},
organization={ACM}
}

@inproceedings{yu2021self,
title={Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation},
author={Yu, Junliang and Yin, Hongzhi and Li, Jundong and Wang, Qinyong and Hung, Nguyen Quoc Viet and Zhang, Xiangliang},
booktitle={Proceedings of the Web Conference 2021},
pages={413--424},
year={2021}
}
