Overview

clip-text-decoder

Generate text captions for images from their CLIP embeddings. Includes PyTorch model code and an example training script.

Example Predictions

Example captions were computed with the pretrained model mentioned below.

"A man riding a wave on top of a surfboard."

A surfer riding a wave

A baseball player is swinging a bat at a ball.

Baseball player

"A dog running across a field with a frisbee."

Dog with frisbee

Installation

Install the package for easy access to the following objects/classes:

  • clip_text_decoder.datasets.ClipCocoCaptionsDataset
  • clip_text_decoder.model.ClipDecoder
  • clip_text_decoder.model.ClipDecoderInferenceModel
  • clip_text_decoder.tokenizer.Tokenizer

The train.py script is not included in the installed package, since it lives in the repository root. To train new models, either clone this repository or recreate train.py locally.

Using pip:

pip install clip-text-decoder

From source:

git clone https://github.com/fkodom/clip-text-decoder.git
cd clip-text-decoder
pip install .

NOTE: You'll also need to install openai/CLIP to encode images with CLIP. This is also required by ClipCocoCaptionsDataset to build the captions dataset the first time (cached for subsequent calls).

pip install "clip @ git+https://github.com/openai/CLIP.git"

For technical reasons, the CLIP dependency can't be declared in the PyPI package, since CLIP itself isn't officially published on PyPI.
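
As a quick sanity check (a minimal sketch, assuming the install above succeeded), you can confirm that the clip package imports and lists its available backbones:

import clip

# Prints the CLIP backbones accepted by clip.load(), e.g. "ViT-B/32".
print(clip.available_models())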

Training

Open In Colab

Launch your own training session using the provided script (train.py):

python train.py --max-epochs 5

Training CLI arguments, along with their default values:

--max-epochs 5  # (int)
--num-layers 6  # (int)
--dim-feedforward 256  # (int)
--precision 16  # (16 or 32)
--seed 0  # (int)
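
For example, to train for more epochs with a wider feedforward layer (the flag values below are illustrative, not tuned recommendations):

python train.py --max-epochs 10 --num-layers 6 --dim-feedforward 512 --seed 42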

Inference

The training script will produce a model.zip archive, containing the Tokenizer and trained model parameters. To perform inference with it:

import clip
from PIL import Image
import torch

from clip_text_decoder.model import ClipDecoderInferenceModel

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ClipDecoderInferenceModel.load("path/to/model.zip").to(device)
clip_model, clip_preprocessor = clip.load("ViT-B/32", device=device, jit=False)

# Create a blank dummy image
dummy_image = Image.new("RGB", (224, 224))
preprocessed = clip_preprocessor(dummy_image).to(device)
# Add a batch dimension using '.unsqueeze(0)'
encoded = clip_model.encode_image(preprocessed.unsqueeze(0))
text = model(encoded)

print(text)
# Probably some nonsense, because we used a dummy image.
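
To caption a real photo rather than a blank dummy image, the same pipeline applies. A minimal sketch, where "photo.jpg" is a hypothetical local image file:

import clip
from PIL import Image
import torch

from clip_text_decoder.model import ClipDecoderInferenceModel

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ClipDecoderInferenceModel.load("path/to/model.zip").to(device)
clip_model, clip_preprocessor = clip.load("ViT-B/32", device=device, jit=False)

# "photo.jpg" is a placeholder; substitute any local image file.
image = Image.open("photo.jpg").convert("RGB")
preprocessed = clip_preprocessor(image).to(device)
# Add a batch dimension before encoding, as in the example above.
encoded = clip_model.encode_image(preprocessed.unsqueeze(0))
print(model(encoded))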

Pretrained Models

A pretrained CLIP decoder is hosted on my Google Drive, and can be downloaded with:

from clip_text_decoder.model import ClipDecoderInferenceModel

model = ClipDecoderInferenceModel.download_pretrained()

To cache the pretrained model locally, so that it's not re-downloaded each time:

model = ClipDecoderInferenceModel.download_pretrained("/path/to/model.zip")

Shortcomings

  • Only works well with COCO-style images. If you go outside the distribution of COCO objects, you'll get nonsense text captions.
  • Relatively short training time. Even within the COCO domain, you'll occasionally see incorrect captions. Quite a few captions will have bad grammar, repetitive descriptors, etc.
Comments
  • Decoding Text Embeddings Coded Using Hugging Face ClipTextModel

    Suppose I have text embeddings created with Hugging Face's CLIPTextModel, using the following method:

    import torch
    from transformers import CLIPTokenizer, CLIPTextModel

    class_list = [
        "i love going home and playing with my wife and kids",
        "i love going home",
        "playing with my wife and kids",
        "family",
        "war",
        "writing",
    ]

    model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

    inputs = tokenizer(class_list, padding=True, return_tensors="pt")
    outputs = model(**inputs)
    hidden_state = outputs.last_hidden_state
    embeddings = outputs.pooler_output

    Questions:

    1. Is it possible to use the clip-text-decoder to convert the embeddings back to text?
    2. If it is indeed possible to do so, could you provide an example of how?

    Looking forward to receiving your feedback.

    opened by mbdzi 6
  • Fix string error when loading clip models.

    error

    The model name string (VIT-xxx) in the check_vision_backbone function is not compatible with the model name string (ViT-xxx) used by the clip repository, which causes an error either in the check_vision_backbone function or when loading the CLIP model.

    solution

    In this PR, the model name string in the check_vision_backbone function is modified to ViT-xxx to make it compatible with the clip repository.

    opened by Adenialzz 1
  • BLIP vision backbone

    • Added blip backbone; still cleaning up last pieces
    • Bug fixes for training script, and remove debug code.
    • Fix dependencies in test workflow; update README statistics
    • Fix test issue with CUDA device
    • Update unit tests for newer Python, torch versions
    • Test up to Python 3.10
    • Test up to Python 3.9
    • Install lavis first
    opened by fkodom 0
  • Feature: Beam Search

    • Add beam search, clip dependency to setup.py
    • Fix installation instructions
    • Remove main clause
    • Add '--beam-size' option to 'train.py' script.
    • Update README; propagate the '--beam-size' arg through eval functions
    • Update setup.cfg, add pre-commit hooks
    • Reformat images
    • Remove fixed image width
    • Add detail to README; comments to call method for beam search
    • Updated README headline
    opened by fkodom 0
  • Bug Fixes for Broken Tests

    • Cache the old fashioned way :)
    • Fix silly typo in test for image caption model
    • Apply black and isort formatting
    • Install latest version of 'black', reapply formatting
    • Fix flake8 issue (duplicate function definition), and install latest patch version of pytorch for tests.
    • Skip slow tests by default, add 'slow' marker to inference model tests.
    opened by fkodom 0
  • GPT2 Decoder

    • Update model to use DistilGPT2 as a pre-trained decoder.
    • Removed tokenizer (no longer used), fixed bugs in Model source file, and updated model unit tests.
    • Backwards compatibility for 'gdown.download' method.
    • Update installation requirements, caption examples in README
    opened by fkodom 0
  • Upgrade CodeSee workflow to version 2

    CodeSee is a code visibility platform.

    This change updates the CodeSee workflow file to the latest version for security, maintenance, and support improvements (see changelog below).

    That workflow file:

    • runs CodeSee's code analysis on every PR push and merge
    • uploads that analysis to CodeSee
    • does not transmit your code

    The code analysis is used to generate maps and insights about this codebase.

    CodeSee workflow changelog:

    • Improved security: Updates permission to be read-only.
    • Improved future maintenance: Replaces the body of the workflow with a single github action: codesee-action. This makes it significantly easier for CodeSee to introduce future improvements and fixes without requiring another PR like this.
    • Improved Python support: The action now properly supports Python 3.11, and will continue to support new Python versions as they are released.
    opened by codesee-maps[bot] 1
  • Incompatible checksum error

    I see the following error when trying to load the pretrained model.

        tokenizer=pickle.loads(tokenizer_buffer.read()),
      File "stringsource", line 6, in spacy.pipeline.trainable_pipe.__pyx_unpickle_TrainablePipe
    _pickle.PickleError: Incompatible checksums (102742709 vs 0x417ddeb = (cfg, model, name, vocab))
    

    Am I missing something?

    opened by dapurv5 0
Releases (1.4.4)
  • 1.4.4(Nov 7, 2022)

    What's Changed

    • Fix string error when loading clip models. by @Adenialzz in https://github.com/fkodom/clip-text-decoder/pull/12

    New Contributors

    • @Adenialzz made their first contribution in https://github.com/fkodom/clip-text-decoder/pull/12

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.4.3...1.4.4

  • 1.4.3(Nov 7, 2022)

    What's Changed

    • Refactor Dataset by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/11

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.4.2...1.4.3

  • 1.4.2(Oct 26, 2022)

    What's Changed

    • Huggingface Evaluate by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/9

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.4.1...1.4.2

  • 1.4.1(Oct 26, 2022)

    What's Changed

    • Datapipes by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/8

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.4.0...1.4.1

  • 1.4.0(Oct 23, 2022)

    What's Changed

    • BLIP vision backbone by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/7

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.3.0...1.4.0

  • 1.3.0(Oct 2, 2022)

    What's Changed

    • Feature: Beam Search by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/5
    • Bug Fix: PyPI Release by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/6

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.2.0...1.3.0

  • 1.2.0(Jan 29, 2022)

    What's Changed

    • Cache CLIP embeddings for the dataset, rather than recomputing them each time.
    • Reduce model file sizes by storing at lower precision.
    • Add an ImageCaptionInferenceModel class for easier out-of-the-box use.
    • Fix some broken unit tests.
    • Better Data Caching by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/3
    • Bug Fixes for Broken Tests by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/4

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.1.0...1.2.0

  • 1.1.0(Dec 22, 2021)

    What's Changed

    • GPT2 Decoder by @fkodom in https://github.com/fkodom/clip-text-decoder/pull/2

    New Contributors

    • @fkodom made their first contribution in https://github.com/fkodom/clip-text-decoder/pull/2

    Full Changelog: https://github.com/fkodom/clip-text-decoder/compare/1.0.0...1.1.0

  • 0.1.1(Nov 14, 2021)

  • 0.1.0(Nov 14, 2021)

Owner
Frank Odom
Director of Innovation at Plainsight. I like neural nets, and neural nets like me.