This repository contains the source code and data for reproducing the results of the Deep Continuous Clustering paper.


Deep Continuous Clustering

Introduction

This is a PyTorch implementation of the DCC algorithm presented in the following paper:

Sohil Atul Shah and Vladlen Koltun. Deep Continuous Clustering.

If you use this code in your research, please cite our paper.

@article{shah2018DCC,
	author    = {Sohil Atul Shah and Vladlen Koltun},
	title     = {Deep Continuous Clustering},
	journal   = {arXiv:1803.01449},
	year      = {2018},
}

The source code and dataset are published under the MIT license. See LICENSE for details. In general, you can use the code for any purpose with proper attribution. If you do something interesting with the code, we'll be happy to know. Feel free to contact us.

Requirements

Pretraining SDAE

Note: Please find the required files and checkpoints for the MNIST dataset shared here.

Please create a new folder for each dataset under the data folder, following the structure of the mnist dataset. The training and validation data for each dataset must be placed under its respective folder.
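
For reference, a typical layout would look like the following (the .mat file names follow the input format described in the "Creating input" section below; the results folder is created during training):

    data/
      mnist/
        traindata.mat
        testdata.mat
        results/
      yourdataset/
        traindata.mat
        testdata.mat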

We have already provided the train and test data files for the MNIST dataset. For example, one can start pretraining the SDAE from the console as follows:

$ python pretraining.py --data mnist --tensorboard --id 1 --niter 50000 --lr 10 --step 20000

Different settings for the total number of iterations, learning rate, and step size may be required for other datasets. Please find the details in the comments inside the pretraining file.

Extracting Pretrained Features

The features from the pretrained SDAE network are extracted as follows:

$ python extract_feature.py --data mnist --net checkpoint_4.pth.tar --features pretrained

By default, the model checkpoint for the pretrained SDAE network is stored under the results folder.

Copying mkNN graph

The copyGraph program is used to merge the preprocessed mkNN graph (built using the code provided by RCC) and the extracted pretrained features. Note that the mkNN graph is built on the original data and not on the SDAE features.

$ python copyGraph.py --data mnist --graph pretrained.mat --features pretrained.pkl --out pretrained

The above command assumes that the graph is stored in the pretrained.mat file; the merged output is written back to pretrained.mat.

DCC looks for a file named pretrained.mat, so please retain this name.
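
As a quick sanity check before launching DCC, you can list what ended up inside the merged file. The path below assumes the MNIST layout used in this README, and the variable names depend on the RCC preprocessing, so this sketch only prints the keys:

    import scipy.io as sio

    # Load the merged graph + pretrained-features file and list its variables.
    merged = sio.loadmat('data/mnist/pretrained.mat')
    print(sorted(k for k in merged if not k.startswith('__')))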

Running Deep Continuous Clustering

Once the features are extracted and the graph details are merged, one can start training the DCC algorithm.

As a sanity check, we have also provided a pretrained.mat file and the SDAE model files for the MNIST dataset under the data folder. For example, one can run DCC on MNIST from the console as follows:

$ python DCC.py --data mnist --net checkpoint_4.pth.tar --tensorboard --id 1

The other preprocessed graph files can be found in the gdrive folder provided by RCC.

Evaluation

Towards the end of the DCC run, i.e., once the stopping criterion is met, DCC evaluates the cluster assignment over the complete dataset. The evaluation output is logged to TensorBoard. The penultimate evaluation output is the one reported in the paper.

As in RCC, the AMI definition followed here differs slightly from the default definition found in the sklearn package. To match the results listed in the paper, please modify it accordingly.
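
If the difference is only in the normalization term (an assumption on our part, not something specified here), recent versions of sklearn expose it through the average_method argument, for example:

    from sklearn.metrics import adjusted_mutual_info_score

    # Placeholder label arrays; substitute the ground-truth labels and the
    # cluster assignments produced by DCC.
    ground_truth = [0, 0, 1, 1, 2, 2]
    predicted = [1, 1, 0, 0, 2, 2]

    # sklearn >= 0.20: 'arithmetic' is the current default normalization,
    # while 'max' reproduces the older max-normalized AMI variant.
    ami = adjusted_mutual_info_score(ground_truth, predicted, average_method='max')
    print(ami)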

The TensorBoard logs for pretraining and DCC are stored under the results folder, in runs/pretraining and runs/DCC respectively. The final embedded features 'U' and the cluster assignment for each sample are saved in the 'features.mat' file under results.
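
To inspect the saved output programmatically, something along these lines works; the key 'U' is named above, but the path and any other variable names inside the file are assumptions, so check the keys first:

    import scipy.io as sio

    out = sio.loadmat('data/mnist/results/features.mat')
    print(sorted(k for k in out if not k.startswith('__')))  # variables saved by DCC
    U = out['U']                                             # embedded features, one row per sample
    print(U.shape)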

Creating input

The input files for SDAE pretraining, traindata.mat and testdata.mat, store the features of the N data samples as an N x D matrix. We followed a 4:1 ratio to split the data into training and validation sets; the provided make_data.py can be used to build them. The training/validation distinction is used only during the pretraining stage. End-to-end training is unsupervised, so no such distinction is made there and all of the data is used.
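
A minimal sketch of preparing such input for a new dataset is shown below; the variable names 'X' and 'Y' inside the .mat files are a guess for illustration, so please check make_data.py for the names the provided loaders actually expect:

    import os

    import numpy as np
    import scipy.io as sio

    X = np.random.rand(1000, 64).astype(np.float32)   # stand-in for your N x D feature matrix
    Y = np.random.randint(0, 10, size=1000)           # labels, used only for evaluation

    split = int(0.8 * len(X))                         # 4:1 train/validation split
    os.makedirs('data/yourdataset', exist_ok=True)
    sio.savemat('data/yourdataset/traindata.mat', {'X': X[:split], 'Y': Y[:split]})
    sio.savemat('data/yourdataset/testdata.mat', {'X': X[split:], 'Y': Y[split:]})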

To construct the mkNN edge set and to create the preprocessed input file, pretrained.mat, from the raw feature file, use edgeConstruction.py released by RCC and follow the instructions therein. Note that the mkNN graph is built on the complete dataset. For simplicity, the code (after the pretraining phase) arranges the data in the order [trainset, testset]; the mkNN construction must be consistent with this ordering.
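
In other words, if you build the graph yourself, concatenate the features in that same order, for example:

    import numpy as np

    # Placeholders for the real train/validation feature matrices.
    train_features = np.random.rand(800, 64)
    test_features = np.random.rand(200, 64)

    # Keep the [trainset, testset] ordering so the row indices used during mkNN
    # construction line up with the ordering assumed after pretraining.
    full_data = np.concatenate([train_features, test_features], axis=0)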

Understanding Steps Through Visual Example

Generate 2D clustered data with

python make_data.py --data easy

This creates 3 clusters whose centers are collinear. We would then expect to need only a 1-dimensional latent space (either x or y) to project the data onto the line passing through the cluster centers.
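
Roughly, the toy data can be thought of as three Gaussian blobs with collinear centers (a sketch for intuition, not the actual make_data.py code), with 600 points in total to match the command below:

    import numpy as np

    centers = np.array([[-4.0, -4.0], [0.0, 0.0], [4.0, 4.0]])   # collinear centers
    points = np.vstack([c + 0.3 * np.random.randn(200, 2) for c in centers])
    labels = np.repeat(np.arange(3), 200)                        # 600 samples in 3 clusters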

[Image: generated ground-truth clusters]

Construct the mkNN graph with

python edgeConstruction.py --dataset easy --samples 600

Pretrain SDAE with

python pretraining.py --data easy --tensorboard --id 1 --niter 500 --dim 1 --lr 0.0001 --step 300

You can inspect the pretraining losses using TensorBoard (requires TensorFlow) with

tensorboard --logdir data/easy/results/runs/pretraining/1/

Then navigate to the HTTP link logged to the console.

Extract the pretrained features with

python extract_feature.py --data easy --net checkpoint_2.pth.tar --features pretrained --dim 1

Merge the preprocessed mkNN graph and the pretrained features with

python copyGraph.py --data easy --graph pretrained.mat --features pretrained.pkl --out pretrained

Run DCC with

python DCC.py --data easy --net checkpoint_2.pth.tar --tensorboard --id 1 --dim 1

Debug and show how the representatives shift over epochs with

tensorboard --logdir data/easy/results/runs/DCC/1/ --samples_per_plugin images=100

Pretraining and DCC together in one script

See easy_example.py for the previous, easy-to-visualize example with all steps performed in one script. Execute the script to run the entire previous section at once. You can visualize the results, such as how the representatives drift over iterations, with the TensorBoard command above by navigating to the Images tab.

With an autoencoder, the representatives shift over epochs like this: [Image: representative shift with autoencoder]
