GeDML is an easy-to-use generalized deep metric learning library

Overview


News

  • [2021-9-6]: v0.0.0 has been released.

Introduction

GeDML is an easy-to-use generalized deep metric learning library, which contains:

  • State-of-the-art DML algorithms: The library provides 18+ loss functions and 6+ sampling strategies, divided into three categories (i.e., collectors, selectors, and losses).
  • Bridge between DML and SSL: We attempt to bridge the gap between deep metric learning and self-supervised learning through specially designed modules, such as collectors.
  • Auxiliary modules to assist in building: We also encapsulate an upper-level interface so users can start programs quickly, and separate the code from the configs to manage hyper-parameters conveniently.

Installation

Pip

pip install gedml

Framework

This project is modular in design. The pipeline diagram is as follows:

Pipeline
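
To make the modular design concrete, below is a minimal, self-contained sketch (plain PyTorch, not gedml's actual API) of how the three module types typically compose in one training step: a collector builds a metric matrix, a selector picks informative pairs, and a loss reduces the selected terms. The similarity margin 0.5 is an arbitrary illustrative value.

import torch
import torch.nn.functional as F

def training_step(model, data, labels):
    """One DML step structured as collector -> selector -> loss (illustrative only)."""
    embeddings = F.normalize(model(data), dim=1)

    # "Collector": build the pairwise cosine-similarity matrix.
    sim = embeddings @ embeddings.t()

    # "Selector": positive pairs share a label, negative pairs do not.
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos, neg = same & ~eye, ~same

    # "Loss": a simple contrastive-style objective on the selected pairs.
    return (1.0 - sim[pos]).mean() + F.relu(sim[neg] - 0.5).mean()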

Code structure

  • _debug: Debug files.
  • demo: Demos of configuration files.
  • docs: Documentation.
  • src: Source code.
    • core: Losses, selectors, collectors, etc.
    • client: Tmux manager.
    • config: Config files including link, convert, assert and params.
    • launcher: Manager, Trainer, Tester, etc.
    • recorder: Recorder.

Method

Collectors

method | description
BaseCollector | Base class
DefaultCollector | Do nothing
ProxyCollector | Maintain a set of proxies
MoCoCollector | paper: Momentum Contrast for Unsupervised Visual Representation Learning
SimSiamCollector | paper: Exploring Simple Siamese Representation Learning
HDMLCollector | paper: Hardness-Aware Deep Metric Learning
DAMLCollector | paper: Deep Adversarial Metric Learning
DVMLCollector | paper: Deep Variational Metric Learning
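
For instance, momentum-based collectors such as MoCoCollector keep a slowly updated key encoder. A generic sketch of the exponential-moving-average update they rely on (illustrative only, not gedml's implementation; the momentum 0.999 is the common default from the paper):

import torch

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    # EMA update of the key encoder's weights from the query encoder's weights.
    for q, k in zip(query_encoder.parameters(), key_encoder.parameters()):
        k.data.mul_(m).add_(q.data, alpha=1.0 - m)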

Losses

classifier-based

method | description
CrossEntropyLoss | Cross entropy loss for unsupervised methods
LargeMarginSoftmaxLoss | paper: Large-Margin Softmax Loss for Convolutional Neural Networks
ArcFaceLoss | paper: ArcFace: Additive Angular Margin Loss for Deep Face Recognition
CosFaceLoss | paper: CosFace: Large Margin Cosine Loss for Deep Face Recognition
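
These heads differ mainly in how a margin is injected into the classification logits. As one example, an additive-angular-margin (ArcFace-style) head can be sketched as follows; this is a generic illustration, not gedml's ArcFaceLoss, and the scale s and margin m are common defaults rather than the library's values.

import torch
import torch.nn.functional as F

class ArcMarginSketch(torch.nn.Module):
    """Illustrative additive-angular-margin classifier head (ArcFace-style)."""
    def __init__(self, num_classes, embedding_dim, s=64.0, m=0.5):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(num_classes, embedding_dim))
        self.s, self.m = s, m

    def forward(self, embeddings, labels):
        # Cosine similarity between normalized embeddings and class weights.
        cos = F.linear(F.normalize(embeddings, dim=1), F.normalize(self.weight, dim=1))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target = F.one_hot(labels, cos.size(1)).bool()
        # Add the angular margin only to the target-class logit, then rescale.
        logits = torch.where(target, torch.cos(theta + self.m), cos) * self.s
        return F.cross_entropy(logits, labels)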

pair-based

method | description
ContrastiveLoss | paper: Learning a Similarity Metric Discriminatively, with Application to Face Verification
MarginLoss | paper: Sampling Matters in Deep Embedding Learning
TripletLoss | paper: Learning local feature descriptors with triplets and shallow convolutional neural networks
AngularLoss | paper: Deep Metric Learning with Angular Loss
CircleLoss | paper: Circle Loss: A Unified Perspective of Pair Similarity Optimization
FastAPLoss | paper: Deep Metric Learning to Rank
LiftedStructureLoss | paper: Deep Metric Learning via Lifted Structured Feature Embedding
MultiSimilarityLoss | paper: Multi-Similarity Loss With General Pair Weighting for Deep Metric Learning
NPairLoss | paper: Improved Deep Metric Learning with Multi-class N-pair Loss Objective
SignalToNoiseRatioLoss | paper: Signal-To-Noise Ratio: A Robust Distance Metric for Deep Metric Learning
PosPairLoss | paper: Exploring Simple Siamese Representation Learning
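
Pair-based losses operate directly on distances between sample pairs. For reference, the widely used contrastive formulation (pull positive pairs together, push negative pairs beyond a margin) can be written as below; this is a generic reference implementation, not gedml's ContrastiveLoss.

import torch
import torch.nn.functional as F

def contrastive_loss(x1, x2, y, margin=1.0):
    """y = 1 for a positive pair, 0 for a negative pair."""
    d = F.pairwise_distance(x1, x2)
    # Positives: squared distance; negatives: squared hinge on the margin.
    return (y * d.pow(2) + (1 - y) * F.relu(margin - d).pow(2)).mean()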

proxy-based

method | description
ProxyLoss | paper: No Fuss Distance Metric Learning Using Proxies
ProxyAnchorLoss | paper: Proxy Anchor Loss for Deep Metric Learning
SoftTripleLoss | paper: SoftTriple Loss: Deep Metric Learning Without Triplet Sampling
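
Proxy-based losses replace pairwise comparisons between samples with comparisons against learnable class proxies. A sketch in the spirit of ProxyNCA (illustrative only, not gedml's ProxyLoss):

import torch
import torch.nn.functional as F

class ProxyNCASketch(torch.nn.Module):
    """Each class owns a learnable proxy; samples are classified by negative squared distance to the proxies."""
    def __init__(self, num_classes, embedding_dim):
        super().__init__()
        self.proxies = torch.nn.Parameter(torch.randn(num_classes, embedding_dim))

    def forward(self, embeddings, labels):
        e = F.normalize(embeddings, dim=1)
        p = F.normalize(self.proxies, dim=1)
        logits = -torch.cdist(e, p).pow(2)  # closer proxy -> larger logit
        return F.cross_entropy(logits, labels)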

Selectors

method | description
BaseSelector | Base class
DefaultSelector | Do nothing
DenseTripletSelector | Select all triplets
DensePairSelector | Select all pairs
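
Dense selectors simply enumerate every valid pair (or triplet) in a batch from the labels. A minimal sketch of dense pair selection (illustrative only, not gedml's DensePairSelector):

import torch

def dense_pair_indices(labels):
    """Return all (anchor, positive) and (anchor, negative) index pairs in a batch."""
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos = (same & ~eye).nonzero(as_tuple=False)  # same label, excluding self-pairs
    neg = (~same).nonzero(as_tuple=False)        # different label
    return pos, neg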

Quickstart

Please set the environment variable WORKSPACE first to indicate where to manage your project.
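
For example, export WORKSPACE in your shell before launching, or set it from Python before any objects are created; the path below is only a placeholder.

import os

# WORKSPACE tells gedml where project files are managed (placeholder path).
os.environ.setdefault("WORKSPACE", "/path/to/workspace")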

Initialization

Use ConfigHandler to create all objects.

config_handler = ConfigHandler()
config_handler.get_params_dict()
objects_dict = config_handler.create_all()

Start

Use the manager to call the trainer and tester automatically.

manager = utils.get_default(objects_dict, "managers")
manager.run()

Alternatively, use the trainer and tester directly.

trainer = utils.get_default(objects_dict, "trainers")
tester = utils.get_default(objects_dict, "testers")
recorder = utils.get_default(objects_dict, "recorders")

# start to train
utils.func_params_mediator(
    [objects_dict],
    trainer.__call__
)

# start to test
metrics = utils.func_params_mediator(
    [
        {"recorders": recorder},
        objects_dict,
    ],
    tester.__call__
)

Documentation

For more information, please refer to:

📖 👉 Docs

Some specific guides:

Configs

We will continually update the optimal parameters for different configs in TsinghuaCloud.

Code Reference

TODO:

  • Assert parameters.
  • Distributed and non-distributed methods.
  • Write a GitHub Action to automate unit tests, package publishing, and docs building.
  • Add a cross-validation split protocol.