Semantic Scholar's Author Disambiguation Algorithm & Evaluation Suite

S2AND

This repository provides access to the S2AND dataset and S2AND reference model described in the paper S2AND: A Benchmark and Evaluation System for Author Name Disambiguation by Shivashankar Subramanian, Daniel King, Doug Downey, Sergey Feldman (https://arxiv.org/abs/2103.07534).

The reference model will be live on semanticscholar.org later this year, but the trained model is available now as part of the data download (see below).

Installation

To install this package, run the following:

git clone https://github.com/allenai/S2AND.git
cd S2AND
conda create -y --name s2and python==3.7
conda activate s2and
pip install -r requirements.in
pip install -e .

To obtain the training data, run this command after the package is installed (from inside the S2AND directory):
[Expected download size is: 50.4 GiB]

aws s3 sync --no-sign-request s3://ai2-s2-research-public/s2and-release data/

If you run into cryptic errors about GCC on macOS while installing the requirements, try this instead:

CFLAGS='-stdlib=libc++' pip install -r requirements.in

Configuration

Modify the config file at data/path_config.json. This file should look like this:

{
    "main_data_dir": "absolute path to wherever you downloaded the data to",
    "internal_data_dir": "ignore this one unless you work at AI2"
}

As the dummy file says, main_data_dir should point to wherever you downloaded the data, and internal_data_dir can be ignored; it is only used by scripts that rely on unreleased data internal to Semantic Scholar.

How to use S2AND for loading data and training a model

Once you have downloaded the datasets, you can go ahead and load up one of them:

from os.path import join
from s2and.data import ANDData

dataset_name = "pubmed"
parent_dir = "data/pubmed"
dataset = ANDData(
    signatures=join(parent_dir, f"{dataset_name}_signatures.json"),
    papers=join(parent_dir, f"{dataset_name}_papers.json"),
    mode="train",
    specter_embeddings=join(parent_dir, f"{dataset_name}_specter.pickle"),
    clusters=join(parent_dir, f"{dataset_name}_clusters.json"),
    block_type="s2",
    train_pairs_size=100000,
    val_pairs_size=10000,
    test_pairs_size=10000,
    name=dataset_name,
    n_jobs=8,
)

This may take a few minutes - there is a lot of text pre-processing to do.

The first step in the S2AND pipeline is to specify a featurizer and then train a binary classifier that tries to guess whether two signatures are referring to the same person.

We'll do hyperparameter selection with the validation set and then get the test area under the ROC curve.

Here's how to do all that:

from s2and.model import PairwiseModeler
from s2and.featurizer import FeaturizationInfo, featurize
from s2and.eval import pairwise_eval

featurization_info = FeaturizationInfo()
# the cache will make it faster to train multiple times - it stores the features on disk for you
train, val, test = featurize(dataset, featurization_info, n_jobs=8, use_cache=True)
X_train, y_train = train
X_val, y_val = val
X_test, y_test = test

# calibration fits isotonic regression after the binary classifier is fit
# monotone constraints help the LightGBM classifier behave sensibly
pairwise_model = PairwiseModeler(
    n_iter=25, calibrate=True, monotone_constraints=featurization_info.lightgbm_monotone_constraints
)
# this does hyperparameter selection, which is why we need to pass in the validation set.
pairwise_model.fit(X_train, y_train, X_val, y_val)

# this will also dump a lot of useful plots (ROC, PR, SHAP) to the figs_path
pairwise_metrics = pairwise_eval(X_test, y_test, pairwise_model.classifier, figs_path='figs/', title='example')
print(pairwise_metrics)

The second stage in the S2AND pipeline is to tune hyperparameters for the clusterer on the validation data and then evaluate the full clustering pipeline on the test blocks.

We use agglomerative clustering as implemented in fastcluster with average linkage. There is only one hyperparameter to tune.

from s2and.model import Clusterer, FastCluster
from s2and.eval import cluster_eval
from hyperopt import hp

clusterer = Clusterer(
    featurization_info,
    pairwise_model,
    cluster_model=FastCluster(linkage="average"),
    search_space={"eps": hp.uniform("eps", 0, 1)},
    n_iter=25,
    n_jobs=8,
)
clusterer.fit(dataset)

# the metrics_per_signature are there so we can break out the facets if needed
metrics, metrics_per_signature = cluster_eval(dataset, clusterer)
print(metrics)

For a fuller example, please see the transfer script: scripts/transfer_experiment.py.

How to use S2AND for predicting with a saved model

Assuming you have already fit a clusterer, you can dump the model to disk like so:

import pickle

with open("saved_model.pkl", "wb") as _pkl_file:
    pickle.dump(clusterer, _pkl_file)

You can then reload it, load a new dataset, and run prediction:

import pickle
from s2and.data import ANDData

with open("saved_model.pkl", "rb") as _pkl_file:
    clusterer = pickle.load(_pkl_file)

anddata = ANDData(
    signatures=signatures,
    papers=papers,
    specter_embeddings=paper_embeddings,
    name="your_name_here",
    mode="inference",
    block_type="s2",
)
pred_clusters, pred_distance_matrices = clusterer.predict(anddata.get_blocks(), anddata)

Our released models are in the s3 folder referenced above, and are called production_model.pickle and full_union_seed_*.pickle. They can be loaded the same way, except that the pickled object is a dictionary, with a clusterer key.
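
For example, loading the released production model might look like this (a minimal sketch; the exact path depends on where you synced the S3 data):

import pickle

# the released pickles are dictionaries, with the fitted Clusterer stored under the "clusterer" key
with open("data/production_model.pickle", "rb") as _pkl_file:
    clusterer = pickle.load(_pkl_file)["clusterer"]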

Incremental prediction

There is also a predict_incremental function on the Clusterer that allows prediction for just a small set of new signatures. When instantiating ANDData, you can pass in cluster_seeds, which will be used instead of model predictions for those signatures. If you call predict_incremental, the full distance matrix will not be created; each new signature is simply assigned to the existing cluster it has the lowest average distance to, as long as that distance is below the model's eps, and otherwise it is reclustered together with the other unassigned signatures.
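
A minimal sketch of what incremental prediction might look like; the cluster_seeds format, the existing_cluster_seeds and new_signature_ids variables, and the predict_incremental call signature are assumptions, so check the docstrings in s2and for the exact arguments:

from s2and.data import ANDData

# cluster_seeds is assumed to map already-known signature ids to fixed cluster assignments
anddata = ANDData(
    signatures=signatures,
    papers=papers,
    specter_embeddings=paper_embeddings,
    cluster_seeds=existing_cluster_seeds,  # hypothetical variable holding the seed assignments
    name="your_name_here",
    mode="inference",
    block_type="s2",
)
# predict only for the small set of new signatures (assumed call signature)
pred_clusters = clusterer.predict_incremental(new_signature_ids, anddata)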

Reproducibility

The experiments in the paper were run with the Python (3.7.9) package versions in paper_experiments_env.txt. You can install these exact versions by running pip install pip==21.0.0 and then pip install -r paper_experiments_env.txt --use-feature=fast-deps --use-deprecated=legacy-resolver. Rerunning on the branch s2and_paper should produce the same numbers as in the paper (we will update here if this stops being true).

Licensing

The code in this repo is released under the Apache 2.0 license (license file included in the repo). The dataset is released under ODC-BY (included in the S3 bucket with the data). We would also like to acknowledge that some of the affiliations data comes directly from the Microsoft Academic Graph (https://aka.ms/msracad).

Citation

@misc{subramanian2021s2and,
    title={S2AND: A Benchmark and Evaluation System for Author Name Disambiguation},
    author={Shivashankar Subramanian and Daniel King and Doug Downey and Sergey Feldman},
    year={2021},
    eprint={2103.07534},
    archivePrefix={arXiv},
    primaryClass={cs.DL}
}

Comments
  • Find some wrong labels in dataset?

    For example, in the PubMed dataset, in the "clusters.json" file, there is a cluster "PM_352": ['18834', '18835', '18836', '18837', '18838', '18839', '18840', '18841']. But I checked "signatures.json", and '18834' is in given_block "z zhang" while '18836' is in given_block "d zhang"; how could they be in the same cluster? Is there anything I misunderstand?

    opened by hapoyige 15
  • Add extra name incompatibility check

    This PR attempts to prevent new name incompatibilities from being added to a cluster. So if a claimed cluster contains S Govender and Sharlene Govender, s2and might break that claimed cluster up into two, then attach Suendharan Govender to the S Govender piece, and then when we remerge, we have a cluster with S Govender, Sharlene Govender, and Suendharan Govender. I suspect this is the issue behind https://github.com/allenai/scholar/issues/27801#issuecomment-847397953, but did not verify that.

    opened by dakinggg 5
  • Question: Can predictions run in multi-core

    I see that the current implementation of prediction using the production model will run on a single core, which is very slow when working with larger datasets. I was wondering if there is some already-explored way of doing this using multiple cores, if not a GPU?

    opened by jinamshah 4
  • global_dataset trick not working?

    @dakinggg I've got a branch going to make S2AND work for paper deduplication. I haven't really messed with your global_dataset trick (I think), but now it stopped working if n_jobs > 1. Works fine when it runs serially.

    Test fails with FAILED tests/test_featurizer.py::TestData::test_featurizer - NameError: name 'global_dataset' is not defined

    Did you run into this when making it work originally? Any ideas?

    opened by sergeyf 2
  • Question: How does one go about converting their own dataset to the one used for training

    Hello, I understand that this is not technically an issue, but I just want to understand how to convert a dataset of my own (that has information like the research paper name and the authors' details such as name, affiliation, email ID, etc.) to a dataset that can be consumed for training from scratch.

    opened by jinamshah 2
  • No cluster.json in the medline dataset.

    I found that the medline dataset does not contain a "medline_cluster.json" file, which prevents me from reproducing the results. Please add the cluster.json file to S2AND.

    opened by skojaku 1
  • Link to evaluation dataset

    Thank you for this excellent open AND-algorithm and data!

    I followed the link from the paper to this repository, but I was not able to find the S2AND dataset. Could you add some help to the readme, please?

    opened by tomthe 1
  • Update readme.md

    I added intro language, and it includes a reference to the saved models. Are these uploaded already? If so, can you add a commit somewhere in the readme about it and maybe a short example of how to load them?

    opened by sergeyf 1
  • Thanks for the reply! Actually, I am using the dataset to do a clustering task so the divided block and cluster labels matter. Here comes another confusion for me, which is, I thought "block" came from the original source data and "given_block" is the modified version from S2AND, since the number of statistics matches the #Block in Table II in the S2AND paper. Any suggestions?

    Originally posted by @hapoyige in https://github.com/allenai/S2AND/issues/25#issuecomment-1046418074

    opened by hapoyige 0
  • Be more explicit about use_cache to avoid

    Zhipeng and the SPECTER+ team missed the cache specification and were debugging for a long time. These changes should hopefully make the cache easier to understand and notice.

    opened by sergeyf 0
  • Incremental bug

    Fixes an issue in the incremental clustering code where we were not splitting claimed profiles properly to align with the expected S2AND output. As a result, incompatible clusters resulting from claims remained incompatible, and new mentions could not be assigned to them.

    opened by dakinggg 0
  • Future improvements

    • [ ] Unify the set of languages between cld2 and fasttext (see unify_lang branch for a start)
    • [ ] Audit the list of name pairs (noticed (maria, mary), (kathleen, katherine))
    • [ ] Generally improve language detection on titles (would require a whole model)
    • [ ] If a person has two very disjoint "personas", they will end up as two clusters. Probably not resolvable, but putting here anyway
    • [ ] Somehow do better with low-information papers (e.g. no abstract, venue, affiliation, references)
    opened by dakinggg 0
Releases (v1.1_no_refs)