clip-text-decoder

Overview

Generate text captions for images from their CLIP embeddings. Includes PyTorch model code and example training script.

Example Predictions

Example captions were computed with the pretrained model mentioned below.

"A man riding a wave on top of a surfboard."

A surfer riding a wave

A baseball player is swinging a bat at a ball.

Baseball player

"A dog running across a field with a frisbee."

Dog with frisbee

Installation

Install the package for easier access to the following objects/classes:

  • clip_text_decoder.datasets.ClipCocoCaptionsDataset
  • clip_text_decoder.model.ClipDecoder
  • clip_text_decoder.model.ClipDecoderInferenceModel
  • clip_text_decoder.tokenizer.Tokenizer

The train.py script will not be available in the installed package, since it's located in the root directory. To train new models, either clone this repository or recreate train.py locally.

Using pip:

pip install clip-text-decoder

From source:

git clone https://github.com/fkodom/clip-text-decoder.git
cd clip-text-decoder
pip install .

NOTE: You'll also need to install openai/CLIP in order to encode images. ClipCocoCaptionsDataset also requires it to build the captions dataset the first time (the result is cached for subsequent calls).

pip install "clip @ git+https://github.com/openai/CLIP.git"

For technical reasons, the CLIP dependency can't be declared in the PyPI package, since CLIP itself is not an officially published PyPI package.
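
With both packages installed, building the captions dataset might look like this (a minimal sketch; the no-argument constructor is an assumption, and the module path follows the list above):

from clip_text_decoder.datasets import ClipCocoCaptionsDataset

# Assumption: no required constructor arguments. The first call builds the
# dataset (this is where CLIP is needed); later calls load the cached copy.
dataset = ClipCocoCaptionsDataset()
print(len(dataset))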

Training

Open In Colab

Launch your own training session using the provided script (train.py):

python train.py --max-epochs 5

Training CLI arguments, along with their default values:

--max-epochs 5  # (int)
--num-layers 6  # (int)
--dim-feedforward 256  # (int)
--precision 16  # (16 or 32)
--seed 0  # (int)
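
For example, an invocation that spells out all of the defaults (equivalent to the one-liner above):

python train.py \
    --max-epochs 5 \
    --num-layers 6 \
    --dim-feedforward 256 \
    --precision 16 \
    --seed 0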

Inference

The training script will produce a model.zip archive, containing the Tokenizer and trained model parameters. To perform inference with it:

import clip
from PIL import Image
import torch

from clip_text_decoder.model import ClipDecoderInferenceModel

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ClipDecoderInferenceModel.load("path/to/model.zip").to(device)
clip_model, clip_preprocessor = clip.load("ViT-B/32", device=device, jit=False)

# Create a blank dummy image
dummy_image = Image.new("RGB", (224, 224))
preprocessed = clip_preprocessor(dummy_image).to(device)
# Add a batch dimension using '.unsqueeze(0)'
encoded = clip_model.encode_image(preprocessed.unsqueeze(0))
text = model(encoded)

print(text)
# Probably some nonsense, because we used a dummy image.
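
To caption a real image instead, load it with PIL, reusing the objects from the snippet above (a quick sketch; the image path is a placeholder):

image = Image.open("path/to/image.jpg").convert("RGB")
encoded = clip_model.encode_image(clip_preprocessor(image).unsqueeze(0).to(device))
print(model(encoded))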

Pretrained Models

A pretrained CLIP decoder is hosted in my Google Drive, and can be downloaded with:

from clip_text_decoder.model import ClipDecoderInferenceModel

model = ClipDecoderInferenceModel.download_pretrained()

To cache the pretrained model locally, so that it's not re-downloaded each time:

model = ClipDecoderInferenceModel.download_pretrained("/path/to/model.zip")
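
Putting it together with the device placement from the Inference section (a sketch; the cache path is a placeholder):

import torch

from clip_text_decoder.model import ClipDecoderInferenceModel

device = "cuda" if torch.cuda.is_available() else "cpu"
# First call downloads to the given path; later calls reuse the local copy.
model = ClipDecoderInferenceModel.download_pretrained("/path/to/model.zip").to(device)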

Shortcomings

  • Only works well with COCO-style images. If you go outside the distribution of COCO objects, you'll get nonsense text captions.
  • Relatively short training time. Even within the COCO domain, you'll occasionally see incorrect captions. Quite a few captions will have bad grammar, repetitive descriptors, etc.

Comments
  • Decoding Text Embeddings Coded Using Hugging Face ClipTextModel

    Suppose that I have text embeddings created with Hugging Face's ClipTextModel using the following method:

    import torch
    from transformers import CLIPTokenizer, CLIPTextModel

    class_list = [
        "i love going home and playing with my wife and kids",
        "i love going home",
        "playing with my wife and kids",
        "family",
        "war",
        "writing",
    ]

    model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

    inputs = tokenizer(class_list, padding=True, return_tensors="pt")
    outputs = model(**inputs)
    hidden_state = outputs.last_hidden_state
    embeddings = outputs.pooler_output

    Questions:

    1. Is it possible to use the clip-text-decoder to convert the embeddings back to text?
    2. If it is indeed possible to do so, could you provide an example of how?

    Looking forward to receiving your feedback.

    opened by mbdzi 6
  • Fix string error when loading clip models.

    error

    The model name string (VIT-xxx) in the check_vision_backbone function is not compatible with the model name string (ViT-xxx) of the clip repository, which will cause an error either in the check_vision_backbone function or when loading the clip model.

    solution

    In this PR, the model name string in the check_vision_backbone function is modified to ViT-xxx to make it compatible with the clip repository.

    opened by Adenialzz 1
  • BLIP vision backbone

    • Added blip backbone; still cleaning up last pieces
    • Bug fixes for training script, and remove debug code.
    • Fix dependencies in test workflow; update README statistics
    • Fix test issue with CUDA device
    • Update unit tests for newer Python, torch versions
    • Test up to Python 3.10
    • Test up to Python 3.9
    • Install lavis first
    opened by fkodom 0
  • Feature: Beam Search

    • Add beam search, clip dependency to setup.py
    • Fix installation instructions
    • Remove main clause
    • Add '--beam-size' option to 'train.py' script.
    • Update README; propagate the '--beam-size' arg through eval functions
    • Update setup.cfg, add pre-commit hooks
    • Reformat images
    • Remove fixed image width
    • Add detail to README; comments to call method for beam search
    • Updated README headline
    opened by fkodom 0
  • Bug Fixes for Broken Tests

    • Cache the old fashioned way :)
    • Fix silly typo in test for image caption model
    • Apply black and isort formatting
    • Install latest version of 'black', reapply formatting
    • Fix flake8 issue (duplicate function definition), and install latest patch version of pytorch for tests.
    • Skip slow tests by default, add 'slow' marker to inference model tests.
    opened by fkodom 0
  • GPT2 Decoder

    • Update model to use DistilGPT2 as a pre-trained decoder.
    • Removed tokenizer (no longer used), fixed bugs in Model source file, and updated model unit tests.
    • Backwards compatibility for 'gdown.download' method.
    • Update installation requirements, caption examples in README
    opened by fkodom 0
  • Upgrade CodeSee workflow to version 2

    CodeSee is a code visibility platform.

    This change updates the CodeSee workflow file to the latest version for security, maintenance, and support improvements (see changelog below).

    That workflow file:

    • runs CodeSee's code analysis on every PR push and merge
    • uploads that analysis to CodeSee (it does not transmit your code)

    The code analysis is used to generate maps and insights about this codebase.

    CodeSee workflow changelog:

    • Improved security: Updates permission to be read-only.
    • Improved future maintenance: Replaces the body of the workflow with a single github action: codesee-action. This makes it significantly easier for CodeSee to introduce future improvements and fixes without requiring another PR like this.
    • Improved Python support: The action now properly supports Python 3.11, and will continue to support new Python versions as they are released.
    opened by codesee-maps[bot] 1
  • Incompatible checksum error

    I see the following error when trying to load the pretrained model.

        tokenizer=pickle.loads(tokenizer_buffer.read()),
      File "stringsource", line 6, in spacy.pipeline.trainable_pipe.__pyx_unpickle_TrainablePipe
    _pickle.PickleError: Incompatible checksums (102742709 vs 0x417ddeb = (cfg, model, name, vocab))
    

    Am I missing something?

    opened by dapurv5 0
Releases (1.4.4)
  • 1.4.4 (Nov 7, 2022)

    What's Changed

    • Fix string error when loading clip models. by @Adenialzz in https://github.com/fkodom/clip-text-decoder/pull/12

    New Contributors

    • @Adenialzz made their first contribution in https://github.com/fkodom/clip-text-decoder/pull/12

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.4.3...1.4.4

  • 1.4.3 (Nov 7, 2022)

    What's Changed

    • Refactor Dataset by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/11

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.4.2...1.4.3

  • 1.4.2 (Oct 26, 2022)

    What's Changed

    • Huggingface Evaluate by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/9

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.4.1...1.4.2

  • 1.4.1 (Oct 26, 2022)

    What's Changed

    • Datapipes by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/8

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.4.0...1.4.1

  • 1.4.0 (Oct 23, 2022)

    What's Changed

    • BLIP vision backbone by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/7

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.3.0...1.4.0

  • 1.3.0 (Oct 2, 2022)

    What's Changed

    • Feature: Beam Search by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/5
    • Bug Fix: PyPI Release by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/6

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.2.0...1.3.0

  • 1.2.0 (Jan 29, 2022)

    What's Changed

    • Cache CLIP embeddings for the dataset, rather than recomputing them each time.
    • Reduce model file sizes by storing at lower precision.
    • Add an ImageCaptionInferenceModel class for easier out-of-the-box use.
    • Fix some broken unit tests.
    • Better Data Caching by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/3
    • Bug Fixes for Broken Tests by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/4

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.1.0...1.2.0

  • 1.1.0 (Dec 22, 2021)

    What's Changed

    • GPT2 Decoder by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/2

    New Contributors

    • @fkodom made their first contribution in https://github.com/fkodom/clip-text-decoder/pull/2

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.0.0...1.1.0

  • 0.1.1 (Nov 14, 2021)

  • 0.1.0 (Nov 14, 2021)

Owner

Frank Odom
Director of Innovation at Plainsight. I like neural nets, and neural nets like me.