An open source implementation of CLIP.

Overview

OpenCLIP

Welcome to an open source implementation of OpenAI's CLIP (Contrastive Language-Image Pre-training).

The goal of this repository is to enable training models with contrastive image-text supervision, and to investigate their properties such as robustness to distribution shift. Our starting point is an implementation of CLIP that matches the accuracy of the original CLIP models when trained on the same dataset. Specifically, a ResNet-50 model trained with our codebase on OpenAI's 15 million image subset of YFCC achieves 32.7% top-1 accuracy on ImageNet. OpenAI's CLIP model reaches 31.3% when trained on the same subset of YFCC. For ease of experimentation, we also provide code for training on the 3 million images in the Conceptual Captions dataset, where a ResNet-50x4 trained with our codebase reaches 22.2% top-1 ImageNet accuracy.

As we describe in more detail below, CLIP models in a medium accuracy regime already allow us to draw conclusions about the robustness of larger CLIP models since the models follow reliable scaling laws.

This codebase is a work in progress, and we invite everyone to contribute to making it more accessible and useful. In the future, we plan to add support for TPU training and release larger models. We hope this codebase facilitates and promotes further research in contrastive image-text learning.

Note that src/clip is a copy of OpenAI's official repository with minimal changes.

Data

Conceptual Captions

OpenCLIP reads a CSV file with two columns: a path to an image, and a text caption. The names of these columns are passed as arguments to main.py (see --csv-img-key and --csv-caption-key in the sample command below).
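
For illustration, here is a hedged sketch that writes such a CSV with the column names used in the sample training command below (filepath and title). The tab delimiter is an assumption about the separator main.py expects; adjust it if your setup differs.

# Hypothetical helper: build a small training CSV for OpenCLIP.
# Column names match --csv-img-key filepath / --csv-caption-key title below.
import csv

rows = [
    ("images/0001.jpg", "a dog catching a frisbee in the park"),
    ("images/0002.jpg", "a red bicycle leaning against a brick wall"),
]

with open("train_data.csv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")  # assumed separator; change if needed
    writer.writerow(["filepath", "title"])
    writer.writerows(rows)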

The script src/data/gather_cc.py will collect the Conceptual Captions images. First, download the Conceptual Captions URLs and then run the script from our repository:

python3 src/data/gather_cc.py path/to/Train_GCC-training.tsv path/to/Validation_GCC-1.1.0-Validation.tsv

Our training set contains 2.89M images, and our validation set contains 13K images.

YFCC and other datasets

In addition to specifying the training data via CSV files as mentioned above, our codebase also supports webdataset, which is recommended for larger scale datasets. The expected format is a series of .tar files. Each of these .tar files should contain two files for each training example, one for the image and one for the corresponding text. Both files should have the same name but different extensions. For instance, shard_001.tar could contain files such as abc.jpg and abc.txt. You can learn more about webdataset at https://github.com/webdataset/webdataset. We use .tar files with 1,000 data points each, which we create using tarp.
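
As a rough sketch (not the actual training pipeline), shards in this layout can be iterated with the webdataset library roughly as follows; the shard pattern is a placeholder.

# Minimal sketch of reading image/text pairs from .tar shards as described above.
import webdataset as wds

urls = "shards/shard_{000..099}.tar"  # hypothetical brace-expanded shard pattern
dataset = (
    wds.WebDataset(urls)
    .decode("pil")            # decode .jpg entries into PIL images
    .to_tuple("jpg", "txt")   # pair each image with its caption via the shared basename
)

for image, caption in dataset:
    print(image.size, caption[:60])
    break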

You can download the YFCC dataset from Multimedia Commons. Similar to OpenAI, we used a subset of YFCC to reach the aforementioned accuracy numbers. The indices of images in this subset are in OpenAI's CLIP repository.

Training CLIP

Install dependencies

conda env create -f environment.yml
source activate open_clip

Add the src directory to your PYTHONPATH:

cd open_clip
export PYTHONPATH="$PYTHONPATH:$PWD/src"

Sample running code:

nohup python -u src/training/main.py \
    --save-frequency 1 \
    --zeroshot-frequency 1 \
    --report-to tensorboard \
    --train-data="/path/to/train_data.csv"  \
    --val-data="/path/to/validation_data.csv"  \
    --csv-img-key filepath \
    --csv-caption-key title \
    --imagenet-val=/path/to/imagenet/root/val/ \
    --warmup 10000 \
    --batch-size=128 \
    --lr=1e-3 \
    --wd=0.1 \
    --epochs=30 \
    --workers=8 \
    --model RN50

Note: imagenet-val is the path to the validation set of ImageNet for zero-shot evaluation, not the training set! You can remove this argument if you do not want to perform zero-shot evaluation on ImageNet throughout training. Note that the val folder should contain subfolders. If it does not, please use this script.
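
As a quick sanity check, assuming the zero-shot evaluation expects the usual torchvision ImageFolder layout (val/<class>/<image>; the path below is a placeholder):

# Count class subfolders under the ImageNet val root passed to --imagenet-val.
from pathlib import Path

val_root = Path("/path/to/imagenet/root/val")
class_dirs = [p for p in val_root.iterdir() if p.is_dir()]
print(f"{len(class_dirs)} class subfolders found")  # expect 1000 for ImageNet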

When run on a machine with 8 GPUs, the command should produce the following training curve for Conceptual Captions:

CLIP zero shot training curve

More detailed curves for Conceptual Captions are given at /docs/clip_conceptual_captions.md.

When training RN50 on YFCC, the same hyperparameters as above are used, except that lr=5e-4 and epochs=32.

To use another model, such as ViT-B/32, RN50x4, RN50x16, or ViT-B/16, specify it with the --model flag (e.g. --model RN50x4).
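
The available architecture names can also be listed programmatically; list_models is the helper referenced in the test-suite notes further down, so treat this as a hedged sketch.

# Print the model configs known to the installed open_clip package.
import open_clip

print(open_clip.list_models())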

Launch tensorboard:

tensorboard --logdir=logs/tensorboard/ --port=7777

Sample resuming from a checkpoint:

python src/training/main.py \
    --train-data="/path/to/train_data.csv" \
    --val-data="/path/to/validation_data.csv"  \
    --resume /path/to/checkpoints/epoch_K.pt

Sample evaluation only:

python src/training/main.py \
    --val-data="/path/to/validation_data.csv"  \
    --resume /path/to/checkpoints/epoch_K.pt

Trained models

You can find our ResNet-50 trained on YFCC-15M here.
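
Below is a hedged sketch of loading a checkpoint for zero-shot inference. create_model_and_transforms and tokenize are assumed open_clip entry points, and the checkpoint path, image, and prompts are placeholders.

import torch
import open_clip
from PIL import Image

# pretrained is assumed to also accept a path to a local checkpoint
model, _, preprocess = open_clip.create_model_and_transforms(
    "RN50", pretrained="/path/to/checkpoints/epoch_K.pt"
)
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = open_clip.tokenize(["a photo of a dog", "a photo of a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (model.logit_scale.exp() * image_features @ text_features.T).softmax(dim=-1)

print(probs)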

Scaling trends

The plot below shows how zero-shot performance of CLIP models varies as we scale the number of samples used for training. Zero-shot performance increases steadily for both ImageNet and ImageNetV2, and is far from saturated at ~15M samples.

Why are low-accuracy CLIP models interesting?

TL;DR: CLIP models have high effective robustness, even at small scales.

CLIP models are particularly intriguing because they are more robust to natural distribution shifts (see Section 3.3 in the CLIP paper). This phenomenon is illustrated by the figure below, with ImageNet accuracy on the x-axis and ImageNetV2 (a reproduction of the ImageNet validation set with distribution shift) accuracy on the y-axis. Standard training denotes training on the ImageNet train set, and the CLIP zero-shot models are shown as stars.

CLIP scatter plot

As observed by Taori et al., 2020 and Miller et al., 2021, the in-distribution and out-of-distribution accuracies of models trained on ImageNet follow a predictable linear trend (the red line in the above plot). Effective robustness quantifies robustness as accuracy beyond this baseline, i.e., how far a model lies above the red line. Ideally a model would not suffer from distribution shift and fall on the y = x line (trained human labelers are within a percentage point of the y = x line).
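
As a rough numerical illustration of this definition (the papers fit the trend on logit-transformed accuracies; this simplified sketch fits raw accuracies, and every number in it is made up):

# Effective robustness = out-of-distribution accuracy above the baseline trend.
import numpy as np

# (ImageNet acc, ImageNetV2 acc) pairs for hypothetical standard-training models
standard = np.array([[0.60, 0.47], [0.70, 0.57], [0.76, 0.64], [0.80, 0.69]])
slope, intercept = np.polyfit(standard[:, 0], standard[:, 1], deg=1)

def effective_robustness(imagenet_acc, imagenetv2_acc):
    """Distance above the baseline trend (the red line in the plot above)."""
    return imagenetv2_acc - (slope * imagenet_acc + intercept)

print(effective_robustness(0.33, 0.28))  # made-up point for a small CLIP model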

Even though the CLIP models trained with this codebase achieve much lower accuracy than those trained by OpenAI, our models still lie on the same trend of improved effective robustness (the purple line). Therefore, we can study what makes CLIP robust without requiring industrial-scale compute.

For more information on effective robustness, please see Taori et al., 2020 and Miller et al., 2021.

The Team

We are a group of researchers at UW, Google, Stanford, Amazon, Columbia, and Berkeley.

Gabriel Ilharco*, Mitchell Wortsman*, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, John Miller, Hongseok Namkoong, Hannaneh Hajishirzi, Ali Farhadi, Ludwig Schmidt

Special thanks to Jong Wook Kim and Alec Radford for help with reproducing CLIP!

Citing

If you found this repository useful, please consider citing:

@software{ilharco_gabriel_2021_5143773,
  author       = {Ilharco, Gabriel and
                  Wortsman, Mitchell and
                  Carlini, Nicholas and
                  Taori, Rohan and
                  Dave, Achal and
                  Shankar, Vaishaal and
                  Namkoong, Hongseok and
                  Miller, John and
                  Hajishirzi, Hannaneh and
                  Farhadi, Ali and
                  Schmidt, Ludwig},
  title        = {OpenCLIP},
  month        = jul,
  year         = 2021,
  note         = {If you use this software, please cite it as below.},
  publisher    = {Zenodo},
  version      = {0.1},
  doi          = {10.5281/zenodo.5143773},
  url          = {https://doi.org/10.5281/zenodo.5143773}
}
@inproceedings{Radford2021LearningTV,
  title={Learning Transferable Visual Models From Natural Language Supervision},
  author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
  booktitle={ICML},
  year={2021}
}


Comments
  • adding CoCa

    The PR idea is to add the CoCa model as implemented in https://github.com/lucidrains/CoCa-pytorch, using existing parts as much as possible.

    Ideally, it would also add the possibility to choose between custom and non-custom Attention implementations, as is done for CLIP.

    opened by gpucce 76
  • Inference testing using random data

    I have started to work on integration tests with random data. The tests run on all pretrained models at fp32, with JIT True/False where applicable.

    Related issue: #198

    • [x] Inference testing on pre-made input and gt
      • [x] all models as listed by list_models()
      • [x] Image
      • [x] Text
    • [x] Random test data generator
      • [x] Random image data in PIL format
      • [x] Random text data
    • [x] Determine best way to store and recall test data: ~4.6MB for all models with 1 sample per config
    • [ ] Parallelize tests, unlikely due to RAM constraints

    To generate test data:

    python tests/util_test.py --all
    

    populates the tests/data folder with one torch .pt file per model config, to be used by the tests.

    opened by lopho 39
  • Text Tower Refactor

    A refactor of #178 that will keep backwards compat for existing weights/models but provide a different base model for new models using custom text towers...

    opened by rwightman 22
  • How to obtain logits (and probabilities) for 0-shot classification of single classes

    First of all, thanks for the amazing work going into this repo! In the case where we want to return the probability of the presence of 1 class (e.g. "dog") in a set of images, how would we go about it?

    While (100.0 * image_features @ text_features.T).softmax(dim=-1) provides well-calibrated probabilities in the multi-class setting, (100.0 * image_features @ text_features.T).sigmoid() does not when we return the logits of only 1 class and have no other classes to compute the softmax against.

    From logits = np.dot(I_e, T_e.T) * np.exp(t) in Figure 3 of the CLIP paper, it would have to follow that t=4.6... given np.exp(t)=100 from the usage snippet in the README, is this correct? (Edit: indeed model.logit_scale confirms this.) And wouldn't this be surprisingly consistent across architectures/training runs? I believe the OpenAI implementation initialises the scale to 1/.07, leading to an initial scaling factor of approximately 14.29, which is then trained, of course (link to code).

    Alternatively, could I try sampling a random, normalised vector as text_feature for a non-existing "non-dog" class and apply (100.0 * image_features @ text_features.T).softmax(dim=-1) as in the multi-class setting? Thanks!

    opened by arnaudvl 21
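
    A hedged illustration of the temperature point discussed above; the feature tensors are random placeholders, and this is not a recipe for calibrated single-class probabilities.

    import torch
    import torch.nn.functional as F

    # logit_scale stores the log of the temperature, so exp() recovers the factor:
    # exp(log(1/0.07)) ≈ 14.29 at initialization, ≈ 100 (i.e. t ≈ 4.6) after training.
    logit_scale = torch.nn.Parameter(torch.log(torch.tensor(1 / 0.07)))
    print(logit_scale.exp())

    image_features = F.normalize(torch.randn(1, 512), dim=-1)
    text_features = F.normalize(torch.randn(1, 512), dim=-1)
    logits = 100.0 * image_features @ text_features.T  # single-class logit
    print(torch.sigmoid(logits))  # not a calibrated probability, as noted in the question
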
  • add `generate` to coca model

    This PR adds the generate method to the CoCa model, to support text generation.

    based on https://github.com/lucidrains/x-transformers/blob/main/x_transformers/autoregressive_wrapper.py

    opened by gpucce 16
  • Get well adjusted confidence scores from similarity of CLIP encodings

    I am using CLIP to check the similarity between text and an image. For example, I have a list of words (objects) I want to check against, such as (“elephant”, “tiger”, “giraffe”).

    By taking the dot product of the encodings I get the similarity value. To evaluate the “confidence” I take the softmax over the outputs, and it works very well at predicting which class is in the image. But it could happen that the classes are not mutually exclusive, and in that case softmax doesn’t make sense. I tried to use sigmoid as is done in multi-label classification, but it seems to give me values all around 0.55 (classes that are correct around 0.56 and classes that are wrong around 0.54), so in the example (0.565, 0.55, 0.62) if elephant and giraffe are in the picture. Thus it is hard to set a threshold there.

    I would like to have something like (0.95, 0.05, 0.98) if elephant and giraffe are in the picture, i.e. the similarity is high for both words.

    Am I thinking too complicated and there is a standard way to do this? Is it even possible to get this well adjusted confidence score?

    opened by justlike-prog 15
  • Loss is constant

    I'm using CLIP to train on my custom dataset with the following params:

    Dataset size: 50k image-text pairs
    Batch size: 128
    Image size: 224
    GPUs: 1
    Epochs: 500

    It's been running for a while now, I'm on my 15th epoch, and the loss hasn't changed at all. It isn't a constant number, but it stays around 4.8xxx. Should I be concerned? I'm not sure why this is happening.


    opened by tarunn2799 14
  • AttributeError Open CLIP has no attribute "create_model"

    I am suddenly having issues loading CLIP models, without any change to our system in almost 2 months. I am getting the following error:

    AttributeError                            Traceback (most recent call last)
    <ipython-input-4-145ac54c6c43> in <module>
        509 if RN101: print("Downloading CLIP Model: RN101");clip_models.append(clip.load('RN101', jit=False)[0].eval().requiresgrad(False).to(device))
        510 
    --> 511 if ViTB32_laion2b_e16: print("Downloading CLIP Model: ViT-B/32 laion2b_e16"); clip_models.append(open_clip.create_model('ViT-B-32', pretrained='laion2b_e16').eval().requiresgrad(False).to(device))
        512 if ViTB32_laion400m_e31: print("Downloading CLIP Model: ViT-B-32 laion400m_e31"); clip_models.append(open_clip.create_model('ViT-B-32', pretrained='laion400m_e31').eval().requiresgrad(False).to(device))
        513 if ViTB32_laion400m_32: print("Downloading CLIP Model: ViT-B/32 laion400m_e32"); clip_models.append(open_clip.create_model('ViT-B-32', pretrained='laion400m_e32').eval().requiresgrad(False).to(device))
    
    AttributeError: module 'open_clip' has no attribute 'create_model
    

    Has this been documented or seen before?

    Of note, this seems to be happening randomly, as if the whole module is crashing. One load of a CLIP model is fine, and the next says there is no create_model method, which it previously used just fine.

    opened by WASasquatch 12
  • Add support for gradient accumulation.

    Added a new flag --accum-freq (accumulation frequency) which defaults to 1.

    If this is greater than 1, then the optimizer is only stepped every --accum-freq batches.

    Can be combined with gradient checkpointing.

    This feature was requested for the case where people only have a few GPUs but want to train with a large batch size.

    We don't have to merge this if people think it makes things too complicated; we can instead close it and point to it upon request. But I'm at least curious to hear thoughts.

    For a per-GPU batch size of m and --accum-freq k, the effective per-GPU batch size is m * k.

    The basic pseudocode, when --accum-freq > 1, is:

    accum_data, accum_features = [], []
    for i, data in enumerate(dataloader):

      # first, get the features for this batch without gradient tracking
      with torch.no_grad():
        features = model(data)
      accum_data.append(data)
      accum_features.append(features)

      # keep accumulating until accum_freq batches have been collected
      if (i + 1) % accum_freq > 0:
        continue

      # now re-compute the forward pass for the accumulated batches, with gradient tracking
      optimizer.zero_grad()
      for j, data in enumerate(accum_data):
        features = model(data)
        all_features = torch.cat(accum_features[:j] + [features] + accum_features[j + 1:])
        loss = get_loss(all_features)
        loss.backward()

      optimizer.step()
      accum_data, accum_features = [], []
    
    opened by mitchellnw 11
  • Naming clash in new CLIP models

    I just cloned this repository on a Windows computer and saw the following:

    PS C:\Users\585491\documents\research> git clone https://github.com/mlfoundations/open_clip.git
    Cloning into 'open_clip'...
    remote: Enumerating objects: 1637, done.
    remote: Counting objects: 100% (74/74), done.
    remote: Compressing objects: 100% (53/53), done.
    remote: Total 1637 (delta 25), reused 49 (delta 17), pack-reused 1563
    Receiving objects: 100% (1637/1637), 8.06 MiB | 10.91 MiB/s, done.
    Resolving deltas: 100% (934/934), done.
    warning: the following paths have collided (e.g. case-sensitive paths
    on a case-insensitive filesystem) and only one from the same
    colliding group is in the working tree:
    
      'src/open_clip/model_configs/ViT-G-14.json'
      'src/open_clip/model_configs/ViT-g-14.json'
      'tests/data/output/ViT-G-14_None_fp32_random_image.pt'
      'tests/data/output/ViT-g-14_None_fp32_random_image.pt'
      'tests/data/output/ViT-G-14_None_fp32_random_text.pt'
      'tests/data/output/ViT-g-14_None_fp32_random_text.pt'
    

    It would be nice if the names could be adjusted to be compliant with case-insensitive file systems.

    opened by StellaAthena 11
  • Cannot reproduce your work

    Hi team, I am trying to reproduce your model's numbers on the Conceptual Captions dataset, but I am not able to get the same results.

    For example, I obtain 0.0532 ImageNet zero-shot top-1 val accuracy with the RN50 config, compared to the ~0.2 you report in the Loss Curves section of the README.

    I used the cc12m dataset, but only about 3M data pairs were obtained due to limited network bandwidth. I used one node with 8 V100 GPUs and a batch size of 150*8=1200, with default params (32 epochs).

    My command is: torchrun --nproc_per_node 8 --nnodes=1 --node_rank=0 -m training.main --train-data '/dataset/cc12m/{00000..00640}.tar' --dataset-type webdataset --batch-size 150 --precision amp --workers 4 --imagenet-val=/dataset/imagenet/val --model RN50 --train-num-samples 3056141 --report-to tensorboard

    Could you please help me find what's wrong? Maybe it is the small dataset (3M pairs, 25% of cc12m), or other hyperparameters (like lr)?

    Could you share the other hyperparameters you used to obtain those results, so that I can run the training with the exact same setup? Or, better yet, could you share the exact command used for the reported training runs? I would really appreciate that.

    Thanks a lot.

    opened by CloudRR 9
  • Fix braceexpand memory explosion for complex urls

    Currently, handling complex webdataset url patterns in args.train_data can lead to an unnecessary memory explosion when using :: to concatenate multiple data sources. This can be correctly parsed by webdataset using wds.shardlists.expand_urls(urls).

    See Issue https://github.com/mlfoundations/open_clip/issues/278.

    opened by gabrielilharco 1
  • Resize embeddings and vocab

    I finetuned the text encoder of CLIP and added some additional tokens to it. Is there a way to load the checkpoint with a larger embedding size?

    opened by ambiSk 0
  • Is there a way to do multi-label classification with CLIP?

    The concrete use case is as follows: I have the classes baby, child, teen, and adult. My idea was to use the similarity between text and image features (for text features I used the prompt 'there is at least one (c) in the photo', c being one of the 4 classes).

    I went through quite a lot of examples, but I am running into the issue that the similarity scores are often very different for a fixed class, and/or classes that appear might have a very similar threshold (like baby and child). For similarity scores I use the cosine similarity multiplied by 2.5 to stretch the score into the interval [0, 1], as is done in the CLIP Score paper.

    Setting a threshold in that sense doesn't seem possible.

    Does anyone have an idea for this? I feel quite stuck on how to proceed.

    opened by justlike-prog 1
  • Support for initializing image tower with pretrained weights

    Related to #332.

    I tried to keep the modifications constrained to factory.py and the configuration that gets passed. I tested full initialization with pretrained weights from openai and laion, and also initializing only the image tower. Review is definitely needed and appreciated.

    opened by Ja1Zhou 0
  • Best practice for supporting initialization from pretrained image tower with custom text tower?

    An example would be the case described in the Chinese-clip paper. If my understanding is correct, this is currently hard to achieve without downloading and merging separate copies of both towers with custom code. I would like to add this feature, and I wonder if I could get some advice on merging it into the main branch.

    opened by Ja1Zhou 1
  • Add TextTextCLIP

    This pull request adds TextTextCLIP (a CLIP-like text-to-text contrastive retrieval model) to the main branch. It is still a work in progress.

    Tasks

    • [X] Add a config file for TextTextCLIP
    • [X] Add TextTextCLIP in model.py
    • [X] Modify factory.py to load model
    • [X] Modify data.py to load text data
    • [X] Modify `main.py` to train TextTextCLIP
    • [X] Test loading TextTextCLIP
    • [X] Test loading text-pair data.
    • [X] Test dummy training
    • [X] Rename variables
    opened by lingjzhu 0